| modelId (string, 5 to 138 chars) | author (string, 2 to 42 chars) | last_modified (date, 2020-02-15 11:33:14 to 2025-04-20 06:26:59) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 429 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-04-20 06:26:36) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
glif-loradex-trainer/swapagrawal14_anamika_unknown_reincarnation | glif-loradex-trainer | "2024-11-18T14:26:59Z" | 16 | 1 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | "2024-11-18T14:26:22Z" | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1731939846111__000003000_0.jpg
text: entrapped in a curse anamika_haunts
- output:
url: samples/1731939871024__000003000_1.jpg
text: haunted, red anamika_haunts
- output:
url: samples/1731939895965__000003000_2.jpg
text: a girl shadow chasing a mananamika_haunts
- output:
url: samples/1731939920916__000003000_3.jpg
text: wounded man anamika_haunts
- output:
url: samples/1731939945918__000003000_4.jpg
text: doll anamika_haunts
- output:
url: samples/1731939971042__000003000_5.jpg
text: witchcraft anamika_haunts
base_model: black-forest-labs/FLUX.1-dev
trigger: anamika_haunts
instance_prompt: anamika_haunts
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# anamika_unknown_reincarnation
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `swapagrawal14`.
<Gallery />
## Trigger words
You should use `anamika_haunts` to trigger the image generation.
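A minimal loading sketch with 🤗 Diffusers is shown below (it assumes access to the gated `black-forest-labs/FLUX.1-dev` base checkpoint and a CUDA device; the prompt and step count are illustrative):
```python
import torch
from diffusers import DiffusionPipeline

# Load the FLUX.1-dev base pipeline and attach this LoRA adapter
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("glif-loradex-trainer/swapagrawal14_anamika_unknown_reincarnation")

# Include the trigger word in the prompt
image = pipe("a haunted doll, anamika_haunts", num_inference_steps=28).images[0]
image.save("anamika_haunts.png")
```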
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/swapagrawal14_anamika_unknown_reincarnation/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
alitolga/electra-base-generator-rank4 | alitolga | "2024-02-12T13:36:31Z" | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:google/electra-base-generator",
"base_model:finetune:google/electra-base-generator",
"license:apache-2.0",
"region:us"
] | null | "2024-02-12T13:35:29Z" | ---
license: apache-2.0
base_model: google/electra-base-generator
tags:
- generated_from_trainer
model-index:
- name: electra-base-generator-rank4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-generator-rank4
This model is a fine-tuned version of [google/electra-base-generator](https://huggingface.co/google/electra-base-generator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2603
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch of these settings follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
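A hedged sketch of how these listed values map onto 🤗 Transformers `TrainingArguments`; the dataset, model head, and any options not listed above are unknown and left at their defaults:
```python
from transformers import TrainingArguments

# Only the hyperparameters reported in this card are set explicitly.
training_args = TrainingArguments(
    output_dir="electra-base-generator-rank4",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```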
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.3543 | 1.0 | 179 | 3.9048 |
| 3.7115 | 2.0 | 358 | 3.3385 |
| 3.4042 | 3.0 | 537 | 3.2603 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
davidschulte/ESM_kuroneko5943__snap21_Pet_Supplies_5 | davidschulte | "2025-03-28T12:57:55Z" | 24 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:kuroneko5943/snap21",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-12-06T11:06:48Z" |
|
ramyanjn/ChemBERTaFTTox | ramyanjn | "2024-05-27T07:27:47Z" | 163 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-27T07:27:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
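Since no snippet is provided yet, here is a minimal sketch based only on this repository's `text-classification` pipeline tag; the expected input format and label meanings are not documented, so treat it as illustrative:
```python
from transformers import pipeline

# Load the fine-tuned RoBERTa-based classifier from this repository
clf = pipeline("text-classification", model="ramyanjn/ChemBERTaFTTox")

# ChemBERTa-style models typically take SMILES strings as input (assumption, not documented here)
print(clf("CCO"))
```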
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ali2066/finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58 | ali2066 | "2022-02-27T16:44:27Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3685 | 0.8293 | 0.8911 |
| No log | 2.0 | 390 | 0.3495 | 0.8415 | 0.8992 |
| 0.4065 | 3.0 | 585 | 0.3744 | 0.8463 | 0.9014 |
| 0.4065 | 4.0 | 780 | 0.4260 | 0.8427 | 0.8980 |
| 0.4065 | 5.0 | 975 | 0.4548 | 0.8366 | 0.8940 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
UmerHA/ConrolNetXS-SDXL-canny | UmerHA | "2023-12-04T15:58:41Z" | 21 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:2302.05543",
"license:openrail",
"region:us"
] | null | "2023-11-13T17:27:59Z" | ---
license: openrail
---
# ControlNet-XS model for StableDiffusionXL and canny edges input
🔬 Original paper and models by https://github.com/vislearn/ControlNet-XS
👷🏽♂️ Translated into diffusers architecture by https://twitter.com/UmerHAdil
This model is trained for use with [StableDiffusionXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
---
ControlNet-XS was introduced in [ControlNet-XS](https://vislearn.github.io/ControlNet-XS/) by Denis Zavadski and Carsten Rother. It is based on the observation that the control model in the [original ControlNet](https://huggingface.co/papers/2302.05543) can be made much smaller and still produce good results.
As with the original ControlNet model, you can provide an additional control image to condition and control Stable Diffusion generation. For example, if you provide a depth map, the ControlNet model generates an image that'll preserve the spatial information from the depth map. It is a more flexible and accurate way to control the image generation process.
Using ControlNet-XS instead of regular ControlNet will produce images of roughly the same quality, but 20-25% faster ([see benchmark](https://github.com/UmerHA/controlnet-xs-benchmark/blob/main/Speed%20Benchmark.ipynb)) and with ~45% less memory usage.
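Below is a rough usage sketch with the ControlNet-XS classes that recent diffusers releases provide; whether this particular checkpoint loads directly through `ControlNetXSAdapter.from_pretrained` depends on its file layout, and the input image URL and prompt are placeholders:
```python
import numpy as np
import torch
import cv2
from PIL import Image
from diffusers import ControlNetXSAdapter, StableDiffusionXLControlNetXSPipeline
from diffusers.utils import load_image

# Build a canny-edge control image from an input photo (URL is a placeholder)
source = load_image("https://example.com/input.jpg")
edges = cv2.Canny(np.array(source), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Load the ControlNet-XS adapter and the SDXL base pipeline
controlnet = ControlNetXSAdapter.from_pretrained(
    "UmerHA/ConrolNetXS-SDXL-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe("a futuristic city at dusk", image=control_image, num_inference_steps=30).images[0]
result.save("controlnet_xs_canny.png")
```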
---
Other ControlNet-XS models:
- [StableDiffusion-XL and depth input](https://huggingface.co/UmerHA/ConrolNetXS-SDXL-depth)
- [StableDiffusion 2.1 and canny edges input](https://huggingface.co/UmerHA/ConrolNetXS-SD2.1-canny)
- [StableDiffusion 2.1 and depth input](https://huggingface.co/UmerHA/ConrolNetXS-SD2.1-depth) |
Falafelki/MyshkinMix | Falafelki | "2025-03-02T09:43:49Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-03-02T09:27:41Z" | ---
license: apache-2.0
---
|
CLMBR/full-transformer-3 | CLMBR | "2024-02-03T08:28:16Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-26T10:06:44Z" | ---
tags:
- generated_from_trainer
model-index:
- name: full2-transformer-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# full2-transformer-3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.2206 | 0.03 | 76320 | 4.1916 |
| 4.0169 | 1.03 | 152640 | 4.0236 |
| 3.9099 | 0.03 | 228960 | 3.9506 |
| 3.8437 | 1.03 | 305280 | 3.9106 |
| 3.7918 | 0.03 | 381600 | 3.8857 |
| 3.7519 | 1.03 | 457920 | 3.8689 |
| 3.7218 | 0.03 | 534240 | 3.8581 |
| 3.6904 | 1.03 | 610560 | 3.8518 |
| 3.6603 | 0.03 | 686880 | 3.8468 |
| 3.6377 | 1.03 | 763200 | 3.8447 |
| 3.6135 | 0.03 | 839520 | 3.8432 |
| 3.5916 | 1.03 | 915840 | 3.8415 |
| 3.5781 | 0.03 | 992160 | 3.8417 |
| 3.5586 | 1.03 | 1068480 | 3.8418 |
| 3.5407 | 0.03 | 1144800 | 3.8439 |
| 3.525 | 1.03 | 1221120 | 3.8447 |
| 3.5057 | 0.03 | 1297440 | 3.8447 |
| 3.4938 | 1.03 | 1373760 | 3.8463 |
| 3.4784 | 0.03 | 1450080 | 3.8474 |
| 3.4732 | 1.03 | 1526400 | 3.8485 |
| 3.4634 | 0.03 | 1602720 | 3.8501 |
| 3.4544 | 1.03 | 1679040 | 3.8525 |
| 3.448 | 0.03 | 1755360 | 3.8527 |
| 3.4382 | 0.03 | 1831680 | 3.8545 |
| 3.4259 | 0.03 | 1908000 | 3.8566 |
| 3.4159 | 1.03 | 1984320 | 3.8575 |
| 3.4029 | 0.03 | 2060640 | 3.8589 |
| 3.3911 | 0.03 | 2136960 | 3.8601 |
| 3.3832 | 0.03 | 2213280 | 3.8616 |
| 3.3725 | 0.03 | 2289600 | 3.8614 |
| 3.3585 | 1.03 | 2365920 | 3.8622 |
| 3.3487 | 0.03 | 2442240 | 3.8639 |
| 3.3357 | 1.03 | 2518560 | 3.8639 |
| 3.3261 | 0.03 | 2594880 | 3.8644 |
| 3.3146 | 0.03 | 2671200 | 3.8653 |
| 3.3102 | 1.03 | 2747520 | 3.8654 |
| 3.3041 | 0.03 | 2823840 | 3.8652 |
| 3.2998 | 1.03 | 2900160 | 3.8649 |
| 3.2998 | 0.03 | 2976480 | 3.8644 |
| 3.2926 | 1.02 | 3052726 | 3.8634 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
eugeneware/ddpm-butterflies-128 | eugeneware | "2022-08-13T16:14:52Z" | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | "2022-08-13T15:45:43Z" | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
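The snippet above was left as a TODO; a minimal sketch of sampling from an unconditional DDPM pipeline like this one, using the standard `DDPMPipeline` API (assuming the checkpoint follows the usual diffusers layout):
```python
from diffusers import DDPMPipeline

# Load the unconditional butterfly model and sample a single 128x128 image
pipeline = DDPMPipeline.from_pretrained("eugeneware/ddpm-butterflies-128")
image = pipeline(num_inference_steps=1000).images[0]
image.save("butterfly.png")
```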
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/eugeneware/ddpm-butterflies-128/tensorboard?#scalars)
|
satyanshu404/long-t5-local-base-finetuned-justification-v09 | satyanshu404 | "2024-04-13T05:24:14Z" | 31 | 0 | transformers | [
"transformers",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/long-t5-local-base",
"base_model:finetune:google/long-t5-local-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-12T03:22:05Z" | ---
license: apache-2.0
base_model: google/long-t5-local-base
tags:
- generated_from_trainer
model-index:
- name: long-t5-local-base-finetuned-justification-v09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# long-t5-local-base-finetuned-justification-v09
This model is a fine-tuned version of [google/long-t5-local-base](https://huggingface.co/google/long-t5-local-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3147
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-07
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 338 | 20.0779 |
| 26.617 | 2.0 | 676 | 17.6054 |
| 22.6857 | 3.0 | 1014 | 15.1205 |
| 22.6857 | 4.0 | 1352 | 12.4837 |
| 18.639 | 5.0 | 1690 | 9.9114 |
| 14.4577 | 6.0 | 2028 | 8.0629 |
| 14.4577 | 7.0 | 2366 | 7.5255 |
| 10.7004 | 8.0 | 2704 | 7.4006 |
| 7.8669 | 9.0 | 3042 | 7.2827 |
| 7.8669 | 10.0 | 3380 | 7.1306 |
| 6.3058 | 11.0 | 3718 | 6.9313 |
| 5.3507 | 12.0 | 4056 | 6.6880 |
| 5.3507 | 13.0 | 4394 | 6.3980 |
| 5.0661 | 14.0 | 4732 | 6.1019 |
| 4.6576 | 15.0 | 5070 | 5.7985 |
| 4.6576 | 16.0 | 5408 | 5.4902 |
| 4.374 | 17.0 | 5746 | 5.2013 |
| 4.1022 | 18.0 | 6084 | 4.9162 |
| 4.1022 | 19.0 | 6422 | 4.6802 |
| 3.9773 | 20.0 | 6760 | 4.4889 |
| 3.7391 | 21.0 | 7098 | 4.3299 |
| 3.7391 | 22.0 | 7436 | 4.2127 |
| 3.6007 | 23.0 | 7774 | 4.1193 |
| 3.472 | 24.0 | 8112 | 4.0468 |
| 3.472 | 25.0 | 8450 | 3.9895 |
| 3.3327 | 26.0 | 8788 | 3.9357 |
| 3.3196 | 27.0 | 9126 | 3.8895 |
| 3.3196 | 28.0 | 9464 | 3.8449 |
| 3.229 | 29.0 | 9802 | 3.8026 |
| 3.1795 | 30.0 | 10140 | 3.7613 |
| 3.1795 | 31.0 | 10478 | 3.7200 |
| 3.0775 | 32.0 | 10816 | 3.6811 |
| 3.065 | 33.0 | 11154 | 3.6424 |
| 3.065 | 34.0 | 11492 | 3.6048 |
| 3.0145 | 35.0 | 11830 | 3.5750 |
| 2.9987 | 36.0 | 12168 | 3.5381 |
| 2.9096 | 37.0 | 12506 | 3.5031 |
| 2.9096 | 38.0 | 12844 | 3.4699 |
| 2.8816 | 39.0 | 13182 | 3.4402 |
| 2.8767 | 40.0 | 13520 | 3.4116 |
| 2.8767 | 41.0 | 13858 | 3.3847 |
| 2.8189 | 42.0 | 14196 | 3.3540 |
| 2.8297 | 43.0 | 14534 | 3.3275 |
| 2.8297 | 44.0 | 14872 | 3.3008 |
| 2.7376 | 45.0 | 15210 | 3.2745 |
| 2.7519 | 46.0 | 15548 | 3.2521 |
| 2.7519 | 47.0 | 15886 | 3.2273 |
| 2.7207 | 48.0 | 16224 | 3.2038 |
| 2.7056 | 49.0 | 16562 | 3.1822 |
| 2.7056 | 50.0 | 16900 | 3.1619 |
| 2.6539 | 51.0 | 17238 | 3.1426 |
| 2.6393 | 52.0 | 17576 | 3.1219 |
| 2.6393 | 53.0 | 17914 | 3.1015 |
| 2.6396 | 54.0 | 18252 | 3.0818 |
| 2.6029 | 55.0 | 18590 | 3.0604 |
| 2.6029 | 56.0 | 18928 | 3.0448 |
| 2.5527 | 57.0 | 19266 | 3.0251 |
| 2.5793 | 58.0 | 19604 | 3.0069 |
| 2.5793 | 59.0 | 19942 | 2.9911 |
| 2.5443 | 60.0 | 20280 | 2.9724 |
| 2.5083 | 61.0 | 20618 | 2.9560 |
| 2.5083 | 62.0 | 20956 | 2.9387 |
| 2.5368 | 63.0 | 21294 | 2.9205 |
| 2.4771 | 64.0 | 21632 | 2.9040 |
| 2.4771 | 65.0 | 21970 | 2.8895 |
| 2.4875 | 66.0 | 22308 | 2.8701 |
| 2.4532 | 67.0 | 22646 | 2.8570 |
| 2.4532 | 68.0 | 22984 | 2.8397 |
| 2.4276 | 69.0 | 23322 | 2.8243 |
| 2.4279 | 70.0 | 23660 | 2.8110 |
| 2.4279 | 71.0 | 23998 | 2.7950 |
| 2.3944 | 72.0 | 24336 | 2.7816 |
| 2.3907 | 73.0 | 24674 | 2.7704 |
| 2.4014 | 74.0 | 25012 | 2.7564 |
| 2.4014 | 75.0 | 25350 | 2.7423 |
| 2.3698 | 76.0 | 25688 | 2.7295 |
| 2.3408 | 77.0 | 26026 | 2.7172 |
| 2.3408 | 78.0 | 26364 | 2.7046 |
| 2.3404 | 79.0 | 26702 | 2.6916 |
| 2.316 | 80.0 | 27040 | 2.6827 |
| 2.316 | 81.0 | 27378 | 2.6706 |
| 2.3322 | 82.0 | 27716 | 2.6607 |
| 2.3005 | 83.0 | 28054 | 2.6500 |
| 2.3005 | 84.0 | 28392 | 2.6408 |
| 2.2661 | 85.0 | 28730 | 2.6315 |
| 2.2946 | 86.0 | 29068 | 2.6231 |
| 2.2946 | 87.0 | 29406 | 2.6131 |
| 2.2493 | 88.0 | 29744 | 2.6034 |
| 2.2623 | 89.0 | 30082 | 2.5940 |
| 2.2623 | 90.0 | 30420 | 2.5857 |
| 2.2464 | 91.0 | 30758 | 2.5777 |
| 2.2203 | 92.0 | 31096 | 2.5714 |
| 2.2203 | 93.0 | 31434 | 2.5641 |
| 2.233 | 94.0 | 31772 | 2.5562 |
| 2.2101 | 95.0 | 32110 | 2.5493 |
| 2.2101 | 96.0 | 32448 | 2.5435 |
| 2.2321 | 97.0 | 32786 | 2.5376 |
| 2.1743 | 98.0 | 33124 | 2.5304 |
| 2.1743 | 99.0 | 33462 | 2.5253 |
| 2.2033 | 100.0 | 33800 | 2.5202 |
| 2.1874 | 101.0 | 34138 | 2.5154 |
| 2.1874 | 102.0 | 34476 | 2.5092 |
| 2.1615 | 103.0 | 34814 | 2.5054 |
| 2.1565 | 104.0 | 35152 | 2.5001 |
| 2.1565 | 105.0 | 35490 | 2.4950 |
| 2.152 | 106.0 | 35828 | 2.4897 |
| 2.1398 | 107.0 | 36166 | 2.4851 |
| 2.1424 | 108.0 | 36504 | 2.4812 |
| 2.1424 | 109.0 | 36842 | 2.4767 |
| 2.1272 | 110.0 | 37180 | 2.4734 |
| 2.1171 | 111.0 | 37518 | 2.4686 |
| 2.1171 | 112.0 | 37856 | 2.4649 |
| 2.1325 | 113.0 | 38194 | 2.4597 |
| 2.0975 | 114.0 | 38532 | 2.4567 |
| 2.0975 | 115.0 | 38870 | 2.4523 |
| 2.1156 | 116.0 | 39208 | 2.4487 |
| 2.0628 | 117.0 | 39546 | 2.4452 |
| 2.0628 | 118.0 | 39884 | 2.4417 |
| 2.1061 | 119.0 | 40222 | 2.4385 |
| 2.0897 | 120.0 | 40560 | 2.4343 |
| 2.0897 | 121.0 | 40898 | 2.4316 |
| 2.083 | 122.0 | 41236 | 2.4271 |
| 2.0693 | 123.0 | 41574 | 2.4241 |
| 2.0693 | 124.0 | 41912 | 2.4212 |
| 2.0748 | 125.0 | 42250 | 2.4180 |
| 2.0497 | 126.0 | 42588 | 2.4152 |
| 2.0497 | 127.0 | 42926 | 2.4128 |
| 2.0803 | 128.0 | 43264 | 2.4098 |
| 2.0701 | 129.0 | 43602 | 2.4060 |
| 2.0701 | 130.0 | 43940 | 2.4032 |
| 2.0358 | 131.0 | 44278 | 2.4010 |
| 2.0487 | 132.0 | 44616 | 2.3981 |
| 2.0487 | 133.0 | 44954 | 2.3956 |
| 2.0402 | 134.0 | 45292 | 2.3927 |
| 2.0425 | 135.0 | 45630 | 2.3895 |
| 2.0425 | 136.0 | 45968 | 2.3873 |
| 2.0379 | 137.0 | 46306 | 2.3844 |
| 2.0297 | 138.0 | 46644 | 2.3818 |
| 2.0297 | 139.0 | 46982 | 2.3785 |
| 2.046 | 140.0 | 47320 | 2.3766 |
| 2.0066 | 141.0 | 47658 | 2.3739 |
| 2.0066 | 142.0 | 47996 | 2.3712 |
| 2.0186 | 143.0 | 48334 | 2.3696 |
| 2.0474 | 144.0 | 48672 | 2.3669 |
| 1.9858 | 145.0 | 49010 | 2.3652 |
| 1.9858 | 146.0 | 49348 | 2.3631 |
| 2.0216 | 147.0 | 49686 | 2.3609 |
| 1.9961 | 148.0 | 50024 | 2.3588 |
| 1.9961 | 149.0 | 50362 | 2.3573 |
| 1.9873 | 150.0 | 50700 | 2.3554 |
| 2.0043 | 151.0 | 51038 | 2.3530 |
| 2.0043 | 152.0 | 51376 | 2.3508 |
| 2.0045 | 153.0 | 51714 | 2.3490 |
| 1.9951 | 154.0 | 52052 | 2.3475 |
| 1.9951 | 155.0 | 52390 | 2.3458 |
| 2.02 | 156.0 | 52728 | 2.3448 |
| 1.9924 | 157.0 | 53066 | 2.3429 |
| 1.9924 | 158.0 | 53404 | 2.3410 |
| 1.9757 | 159.0 | 53742 | 2.3398 |
| 1.9882 | 160.0 | 54080 | 2.3383 |
| 1.9882 | 161.0 | 54418 | 2.3368 |
| 2.0006 | 162.0 | 54756 | 2.3355 |
| 1.9984 | 163.0 | 55094 | 2.3341 |
| 1.9984 | 164.0 | 55432 | 2.3331 |
| 1.9823 | 165.0 | 55770 | 2.3318 |
| 1.9548 | 166.0 | 56108 | 2.3309 |
| 1.9548 | 167.0 | 56446 | 2.3297 |
| 1.9812 | 168.0 | 56784 | 2.3288 |
| 1.9793 | 169.0 | 57122 | 2.3276 |
| 1.9793 | 170.0 | 57460 | 2.3264 |
| 2.0022 | 171.0 | 57798 | 2.3255 |
| 1.9593 | 172.0 | 58136 | 2.3248 |
| 1.9593 | 173.0 | 58474 | 2.3236 |
| 1.9756 | 174.0 | 58812 | 2.3228 |
| 1.9835 | 175.0 | 59150 | 2.3221 |
| 1.9835 | 176.0 | 59488 | 2.3214 |
| 1.9655 | 177.0 | 59826 | 2.3208 |
| 1.9712 | 178.0 | 60164 | 2.3202 |
| 1.9658 | 179.0 | 60502 | 2.3195 |
| 1.9658 | 180.0 | 60840 | 2.3188 |
| 1.9501 | 181.0 | 61178 | 2.3185 |
| 1.992 | 182.0 | 61516 | 2.3180 |
| 1.992 | 183.0 | 61854 | 2.3176 |
| 1.9784 | 184.0 | 62192 | 2.3172 |
| 1.968 | 185.0 | 62530 | 2.3169 |
| 1.968 | 186.0 | 62868 | 2.3165 |
| 1.9746 | 187.0 | 63206 | 2.3161 |
| 1.9615 | 188.0 | 63544 | 2.3159 |
| 1.9615 | 189.0 | 63882 | 2.3157 |
| 1.9405 | 190.0 | 64220 | 2.3155 |
| 1.9869 | 191.0 | 64558 | 2.3153 |
| 1.9869 | 192.0 | 64896 | 2.3152 |
| 1.9614 | 193.0 | 65234 | 2.3150 |
| 1.9641 | 194.0 | 65572 | 2.3149 |
| 1.9641 | 195.0 | 65910 | 2.3148 |
| 1.9813 | 196.0 | 66248 | 2.3148 |
| 1.9676 | 197.0 | 66586 | 2.3147 |
| 1.9676 | 198.0 | 66924 | 2.3147 |
| 1.9302 | 199.0 | 67262 | 2.3147 |
| 1.99 | 200.0 | 67600 | 2.3147 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
thewordsmiths/Mistral-7B-v0.3_sft_LoRA_100000_dpo_LoRA | thewordsmiths | "2024-06-03T04:27:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-03T04:27:17Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: mistralai/Mistral-7B-v0.3
---
# Uploaded model
- **Developed by:** thewordsmiths
- **License:** apache-2.0
- **Finetuned from model :** mistralai/Mistral-7B-v0.3
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF | mradermacher | "2025-02-05T10:50:47Z" | 261 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"dpo",
"en",
"base_model:AmberYifan/Qwen2-7B-sft-hhrlhf-gen-dpo",
"base_model:quantized:AmberYifan/Qwen2-7B-sft-hhrlhf-gen-dpo",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-05T09:57:28Z" | ---
base_model: AmberYifan/Qwen2-7B-sft-hhrlhf-gen-dpo
language:
- en
library_name: transformers
model_name: Qwen2-7B-sft-hhrlhf-gen-dpo
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- dpo
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AmberYifan/Qwen2-7B-sft-hhrlhf-gen-dpo
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
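One common route is llama-cpp-python together with `huggingface_hub`; a minimal sketch (the filename matches the Q4_K_M quant listed below, and the prompt and sampling settings are illustrative):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one single-file quant from this repo and run it locally
gguf_path = hf_hub_download(
    repo_id="mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF",
    filename="Qwen2-7B-sft-hhrlhf-gen-dpo.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Explain direct preference optimization in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```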
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF/resolve/main/Qwen2-7B-sft-hhrlhf-gen-dpo.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF/resolve/main/Qwen2-7B-sft-hhrlhf-gen-dpo.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF/resolve/main/Qwen2-7B-sft-hhrlhf-gen-dpo.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF/resolve/main/Qwen2-7B-sft-hhrlhf-gen-dpo.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF/resolve/main/Qwen2-7B-sft-hhrlhf-gen-dpo.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF/resolve/main/Qwen2-7B-sft-hhrlhf-gen-dpo.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF/resolve/main/Qwen2-7B-sft-hhrlhf-gen-dpo.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF/resolve/main/Qwen2-7B-sft-hhrlhf-gen-dpo.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF/resolve/main/Qwen2-7B-sft-hhrlhf-gen-dpo.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF/resolve/main/Qwen2-7B-sft-hhrlhf-gen-dpo.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF/resolve/main/Qwen2-7B-sft-hhrlhf-gen-dpo.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-7B-sft-hhrlhf-gen-dpo-GGUF/resolve/main/Qwen2-7B-sft-hhrlhf-gen-dpo.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/AlphaHitchhiker-7B-i1-GGUF | mradermacher | "2024-12-22T10:28:14Z" | 75 | 1 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"sft",
"en",
"dataset:migtissera/Hitchhiker",
"base_model:macadeliccc/AlphaHitchhiker-7B",
"base_model:quantized:macadeliccc/AlphaHitchhiker-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-22T09:51:04Z" | ---
base_model: macadeliccc/AlphaHitchhiker-7B
datasets: migtissera/Hitchhiker
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/macadeliccc/AlphaHitchhiker-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/AlphaHitchhiker-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/AlphaHitchhiker-7B-i1-GGUF/resolve/main/AlphaHitchhiker-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nota-ai/cpt-lora_st-vicuna-v1.3-1.5b-ppl | nota-ai | "2024-07-23T01:03:23Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"arxiv:2402.02834",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-04T07:00:09Z" | # Shortened LLM Model Card
Shortened LLM is a depth-pruned version of large language models for efficient text generation.
- **Developed by:** [Nota AI](https://www.nota.ai/)
- **License:** Non-commercial license
- **Repository:** https://github.com/Nota-NetsPresso/shortened-llm
- **Paper:** https://arxiv.org/abs/2402.02834
## Compression Method
* After identifying unimportant Transformer blocks, we perform **one-shot pruning**.
* In retraining pruned models for quality recovery, we leverage **continued pretraining (CPT)**, which involves updating all parameters, on a large-scale pretraining corpus.
* Once CPT is completed, the model in this card is further finetuned with **low-rank adaptation (LoRA)** on an instruction tuning dataset.
## Models from Aggressive Pruning & CPT Retraining (arXiv-v2):
| Source<br>Model | Pruning<br>Ratio | Pruning<br>Criterion | Retraining<br>Method | HF Models<br>Link |
|:---:|:---:|:---:|:---:| :---:|
| Vicuna-v1.3-7B | 20% | PPL | CPT | [nota-ai/cpt_st-vicuna-v1.3-5.5b-ppl](https://huggingface.co/nota-ai/cpt_st-vicuna-v1.3-5.5b-ppl) |
| Vicuna-v1.3-7B | 45% | PPL | CPT | [nota-ai/cpt_st-vicuna-v1.3-3.7b-ppl](https://huggingface.co/nota-ai/cpt_st-vicuna-v1.3-3.7b-ppl) |
| Vicuna-v1.3-7B | 60% | PPL | CPT | [nota-ai/cpt_st-vicuna-v1.3-2.7b-ppl](https://huggingface.co/nota-ai/cpt_st-vicuna-v1.3-2.7b-ppl) |
| Vicuna-v1.3-7B | 80% | PPL | CPT | [nota-ai/cpt_st-vicuna-v1.3-1.5b-ppl](https://huggingface.co/nota-ai/cpt_st-vicuna-v1.3-1.5b-ppl) |
| Vicuna-v1.3-7B | 20% | PPL | CPT⇒LoRA | [nota-ai/cpt-lora_st-vicuna-v1.3-5.5b-ppl](https://huggingface.co/nota-ai/cpt-lora_st-vicuna-v1.3-5.5b-ppl) |
| Vicuna-v1.3-7B | 45% | PPL | CPT⇒LoRA | [nota-ai/cpt-lora_st-vicuna-v1.3-3.7b-ppl](https://huggingface.co/nota-ai/cpt-lora_st-vicuna-v1.3-3.7b-ppl) |
| Vicuna-v1.3-7B | 60% | PPL | CPT⇒LoRA | [nota-ai/cpt-lora_st-vicuna-v1.3-2.7b-ppl](https://huggingface.co/nota-ai/cpt-lora_st-vicuna-v1.3-2.7b-ppl) |
| Vicuna-v1.3-7B | 80% | PPL | CPT⇒LoRA | [nota-ai/cpt-lora_st-vicuna-v1.3-1.5b-ppl](https://huggingface.co/nota-ai/cpt-lora_st-vicuna-v1.3-1.5b-ppl) |
<details>
<summary>
Click to see the results:
</summary>
- EleutherAI/lm-evaluation-harness version [3326c54](https://github.com/EleutherAI/lm-evaluation-harness/tree/3326c547a733d598b4377e54be96e194861b964c)
<img alt="results" img src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/compressed-llm/st_llm-cpt_results.png" width="100%">
</details>
#### Experimental Setup for CPT of Pruned Vicuna-7B
* Dataset: [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B)
* Training using 8 NVIDIA H100 GPUs.
* 5.5B parameters: 37B training tokens (for 6 days)
* 3.7B parameters: 74B tokens (for 8 days)
* 2.7B parameters: 150B tokens (for 12 days)
* 1.5B parameters: 271B tokens (for 11 days)
* AdamW optimizer with (β1, β2)=(0.9, 0.95); a learning rate of 0.0001; a weight decay of 0.1.
* Global batch size: 512 (micro-batch size of 2 × 32 gradient accumulation steps × 8 GPUs).
<details>
<summary>
Click to see the learning curve:
</summary>
**Zero-shot performance over the course of training for models from Vicuna-7B-v1.3 at different pruning ratios.** For each model size, the CPT duration was limited to a two-week period, but additional training could further improve the quality.
<img alt="results" img src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/compressed-llm/st_llm-cpt_learning-curve.png" width="100%">
</details>
#### Experimental Setup for LoRA Instruction Tuning
* Dataset: [Refined Alpaca](https://huggingface.co/datasets/yahma/alpaca-cleaned)
* Training using 1 NVIDIA A100 GPU.
* The retraining costs are low, with the entire process being executed on a single GPU.
* For example, LoRA retraining of a 20%-pruned model from 7B parameters requires about 2 hours and 22GB VRAM.
* A LoRA rank of 8; AdamW optimizer with a learning rate of 0.0001.
* A batch size of 64 over 2 epochs (a configuration sketch follows below).
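A hedged `peft` sketch of the LoRA configuration described above; the target modules and other unreported options are assumptions (typical attention projections for LLaMA/Vicuna-style models), not values from this card:
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                                                      # LoRA rank from the card
    lora_alpha=16,                                            # assumption: not reported
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption: not reported
    lora_dropout=0.05,                                        # assumption: not reported
    task_type="CAUSAL_LM",
)
```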
## Models from Moderate Pruning & LoRA Retraining (arXiv-v1):
| Source<br>Model | Pruning<br>Ratio | Pruning<br>Criterion | HF Models<br>Link |
|:---:|:---:|:---:|:---:|
| LLaMA-1-7B | 20% | PPL | [nota-ai/st-llama-1-5.5b-ppl](https://huggingface.co/nota-ai/st-llama-1-5.5b-ppl) |
| LLaMA-1-7B | 20% | Taylor+ | [nota-ai/st-llama-1-5.5b-taylor](https://huggingface.co/nota-ai/st-llama-1-5.5b-taylor) |
| Vicuna-v1.3-7B | 20% | PPL | [nota-ai/st-vicuna-v1.3-5.5b-ppl](https://huggingface.co/nota-ai/st-vicuna-v1.3-5.5b-ppl) |
| Vicuna-v1.3-7B | 20% | Taylor+ | [nota-ai/st-vicuna-v1.3-5.5b-taylor](https://huggingface.co/nota-ai/st-vicuna-v1.3-5.5b-taylor) |
| Vicuna-v1.3-13B | 21% | PPL | [nota-ai/st-vicuna-v1.3-10.5b-ppl](https://huggingface.co/nota-ai/st-vicuna-v1.3-10.5b-ppl) |
| Vicuna-v1.3-13B | 21% | Taylor+ | [nota-ai/st-vicuna-v1.3-10.5b-taylor](https://huggingface.co/nota-ai/st-vicuna-v1.3-10.5b-taylor) |
<details>
<summary>
Click to see the results:
</summary>
- EleutherAI/lm-evaluation-harness version [3326c54](https://github.com/EleutherAI/lm-evaluation-harness/tree/3326c547a733d598b4377e54be96e194861b964c)
<img alt="results" img src="https://netspresso-research-code-release.s3.us-east-2.amazonaws.com/compressed-llm/st-llama_zero-shot_scores.png" width="100%">
</details>
## License
- All rights related to this repository and the compressed models are reserved by Nota Inc.
- The intended use is strictly limited to research and non-commercial projects.
## Acknowledgments
- [Microsoft for Startups Founders Hub](https://www.microsoft.com/en-us/startups) and [Gwangju AICA](http://www.aica-gj.kr/main.php) for generously providing GPU resources.
- [LLM-Pruner](https://github.com/horseee/LLM-Pruner), which utilizes [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness), [PEFT](https://github.com/huggingface/peft), and [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). Thanks for the pioneering work on structured pruning of LLMs!
- [LLaMA](https://github.com/facebookresearch/llama), [Vicuna](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md), [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B), and [Alpaca-Cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned). Thanks for the open-source LLMs and data!
## Citation
```bibtex
@article{kim2024shortened,
title={Shortened LLaMA: Depth Pruning for Large Language Models with Comparison of Retraining Methods},
author={Kim, Bo-Kyeong and Kim, Geonmin and Kim, Tae-Ho and Castells, Thibault and Choi, Shinkook and Shin, Junho and Song, Hyoung-Kyu},
journal={arXiv preprint arXiv:2402.02834},
year={2024},
url={https://arxiv.org/abs/2402.02834}
}
```
```bibtex
@article{kim2024mefomo,
title={Shortened LLaMA: A Simple Depth Pruning for Large Language Models},
author={Kim, Bo-Kyeong and Kim, Geonmin and Kim, Tae-Ho and Castells, Thibault and Choi, Shinkook and Shin, Junho and Song, Hyoung-Kyu},
journal={ICLR Workshop on Mathematical and Empirical Understanding of Foundation Models (ME-FoMo)},
year={2024},
url={https://openreview.net/forum?id=18VGxuOdpu}
}
``` |
migtissera/Tess-XS-v1.2 | migtissera | "2023-11-25T18:15:20Z" | 1,474 | 2 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-23T21:49:34Z" | ---
license: apache-2.0
---
# Note:
This version is experimental and has been deprecated. Please use the stable release Tess-XS-v1.3-yarn-128K: https://huggingface.co/migtissera/Tess-XS-v1-3-yarn-128K
# Tess

Tess, short for Tessoro/Tessoso, is a general purpose Large Language Model series. Tess-XS-v1.1 was trained on the Mistral-7B base.
# Prompt Format:
```
SYSTEM: <ANY SYSTEM CONTEXT>
USER:
ASSISTANT:
```
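A minimal generation sketch that applies this prompt format with 🤗 Transformers (the system and user messages are placeholders, and the sampling settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "migtissera/Tess-XS-v1.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the SYSTEM/USER/ASSISTANT prompt described above
prompt = "SYSTEM: You are a helpful assistant.\nUSER: What is a good name for a pet raven?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```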
|
asenella/mhd128_JNFDcca_beta_25_scale_True_seed_0 | asenella | "2023-08-17T00:23:53Z" | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | "2023-08-17T00:23:33Z" | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
cleanrl/Kangaroo-v5-sebulba_ppo_envpool-seed1 | cleanrl | "2023-02-05T18:24:43Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Kangaroo-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-05T18:24:42Z" | ---
tags:
- Kangaroo-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Kangaroo-v5
type: Kangaroo-v5
metrics:
- type: mean_reward
value: 1800.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Kangaroo-v5**
This is a trained model of a PPO agent playing Kangaroo-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/sebulba_ppo_envpool.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name sebulba_ppo_envpool --env-id Kangaroo-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-sebulba_ppo_envpool-seed1/raw/main/sebulba_ppo_envpool.py
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-sebulba_ppo_envpool-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Kangaroo-v5-sebulba_ppo_envpool-seed1/raw/main/poetry.lock
poetry install --all-extras
python sebulba_ppo_envpool.py --actor-device-ids 0 --learner-device-ids 1 2 3 4 --params-queue-timeout 0.02 --track --save-model --upload-model --hf-entity cleanrl --env-id Kangaroo-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'anneal_lr': True,
'async_batch_size': 16,
'async_update': 4,
'batch_size': 8192,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Kangaroo-v5',
'exp_name': 'sebulba_ppo_envpool',
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3, 4],
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 2048,
'norm_adv': True,
'num_actor_threads': 1,
'num_envs': 64,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 6103,
'params_queue_timeout': 0.02,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
mradermacher/L3-78b-Large-v1-i1-GGUF | mradermacher | "2025-03-27T08:54:21Z" | 236 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:FINGU-AI/L3-78b-Large-v1",
"base_model:quantized:FINGU-AI/L3-78b-Large-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-25T22:37:06Z" | ---
base_model: FINGU-AI/L3-78b-Large-v1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/FINGU-AI/L3-78b-Large-v1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3-78b-Large-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 24.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 25.5 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 27.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 29.1 | |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 30.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 31.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 31.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q2_K.gguf) | i1-Q2_K | 31.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 34.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 35.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 36.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 37.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 40.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 42.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 42.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q4_0.gguf) | i1-Q4_0 | 44.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 47.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q4_1.gguf) | i1-Q4_1 | 49.1 | |
| [PART 1](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q4_K_M.gguf.part2of2) | i1-Q4_K_M | 50.8 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 55.2 | |
| [PART 1](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 58.4 | |
| [PART 1](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/L3-78b-Large-v1-i1-GGUF/resolve/main/L3-78b-Large-v1.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 69.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
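For programmatic downloads, here is a minimal Python sketch (not part of the original quant list); the filenames come from the table above, and the byte-wise concatenation of the `.partXofY` files is the usual way to reassemble split GGUF uploads:
```python
import shutil
from huggingface_hub import hf_hub_download

repo = "mradermacher/L3-78b-Large-v1-i1-GGUF"

# single-file quant: one download is enough
q4ks_path = hf_hub_download(repo_id=repo, filename="L3-78b-Large-v1.i1-Q4_K_S.gguf")

# multi-part quants (PART 1 / PART 2) are plain byte splits and must be
# concatenated in order before llama.cpp can read them
parts = [
    hf_hub_download(repo_id=repo, filename=f"L3-78b-Large-v1.i1-Q4_K_M.gguf.part{i}of2")
    for i in (1, 2)
]
with open("L3-78b-Large-v1.i1-Q4_K_M.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```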
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
rhyliieee/LLAMA3-MED-v2.2 | rhyliieee | "2024-11-02T21:18:37Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:rhyliieee/notes-completion-set",
"base_model:aaditya/Llama3-OpenBioLLM-8B",
"base_model:quantized:aaditya/Llama3-OpenBioLLM-8B",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-11-02T20:55:27Z" | ---
license: mit
datasets:
- rhyliieee/notes-completion-set
base_model:
- aaditya/Llama3-OpenBioLLM-8B
pipeline_tag: text-generation
library_name: transformers
---
Fine-tuned a pretrained model with LoRA, resized the base model's embeddings, then loaded the PEFT model on top of the resized base model before merging. A cleaned-up version of the snippet (the base model and tokenizer loading lines are inferred from this card's metadata):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# base model and tokenizer inferred from this card's metadata
base_id = "aaditya/Llama3-OpenBioLLM-8B"
open_tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)

# add special tokens to the tokenizer and base model before merging peft with base
open_tokenizer.add_special_tokens({
    "additional_special_tokens": ["<|start_header_id|>", "<|end_header_id|>", "<|eot_id|>"]
})
base_model.resize_token_embeddings(len(open_tokenizer))
# reload the peft model with the resized token embeddings of the base model
peft_model = PeftModel.from_pretrained(base_model, "rhyliieee/LLaMA3-8Bit-Lora-Med-v2")
# perform merging
merged_peft_base_with_special_tokens = peft_model.merge_and_unload()
``` |
Jonjew/PastelGothStyle | Jonjew | "2025-02-09T07:05:05Z" | 7 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | "2025-02-09T07:05:00Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
On a distant, surreal planet, an alien couture collection is displayed in an
otherworldly marketplace. The setting features glowing, crystalline
structures and bioluminescent flora, creating a fantastical, ethereal
environment. Models showcase intricate, alien garments crafted from
translucent, luminous materials with delicate, pulsating patterns. The
attire blends organic, flowing shapes with futuristic, metallic accents,
reflecting a harmonious fusion of natural and advanced aesthetics. The
vibrant colors and unique silhouettes highlight the avant-garde nature of
extraterrestrial fashion in hud_pstl_gth_styl,
output:
url: images/MarkuryFLUX_01099_.png
- text: >-
On a distant, surreal planet, an alien couture collection is displayed in an
otherworldly marketplace. The setting features glowing, crystalline
structures and bioluminescent flora, creating a fantastical, ethereal
environment. Models showcase intricate, alien garments crafted from
translucent, luminous materials with delicate, pulsating patterns. The
attire blends organic, flowing shapes with futuristic, metallic accents,
reflecting a harmonious fusion of natural and advanced aesthetics. The
vibrant colors and unique silhouettes highlight the avant-garde nature of
extraterrestrial fashion in hud_pstl_gth_styl,
output:
url: images/MarkuryFLUX_01097_.png
- text: >-
Energetic Harajuku shopping street filled with cheerful, bubbly fashionistas
showcasing their unique, eclectic styles. The street is lined with quirky,
pastel-colored shops and stalls offering an array of colorful, whimsical
clothing and accessories. The shoppers are dressed in charming, layered
outfits with playful patterns, oversized bows, and bright, cheerful colors.
The scene includes colorful murals, twinkling fairy lights, and festive
decorations, creating a lively, vibrant atmosphere of playful fashion and
youthful exuberancein hud_pstl_gth_styl, pastel, goth, psychedelic
output:
url: images/MarkuryFLUX_01113_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: unknown
---
# Pastel Goth Style FLUX
<Gallery />
## Model description
FROM: https://civitai.com/models/682692/pastel-goth-style-flux
Triggers: hud_pstl_gth_styl, pastel, gothic, dark, fantasy
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/PastelGothStyle/tree/main) them in the Files & versions tab.
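For a quick test with diffusers, a minimal sketch (not part of the original card; the LoRA weight filename is an assumption, check the Files & versions tab for the actual name):
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.float16
).to("cuda")
# weight_name is assumed -- replace it with the .safetensors file shipped in this repo
pipeline.load_lora_weights("Jonjew/PastelGothStyle", weight_name="lora.safetensors")
image = pipeline(
    "a haunted dollhouse in hud_pstl_gth_styl, pastel, gothic, dark, fantasy"
).images[0]
```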
|
AdapterHub/bert-base-multilingual-cased-sw-wiki_pfeiffer | AdapterHub | "2024-05-05T21:05:01Z" | 3 | 0 | adapter-transformers | [
"adapter-transformers",
"bert",
"adapterhub:sw/wiki",
"sw",
"arxiv:2005.00052",
"license:apache-2.0",
"region:us"
] | null | "2024-05-05T21:04:58Z" | ---
tags:
- bert
- adapter-transformers
- adapterhub:sw/wiki
language:
- sw
license: "apache-2.0"
---
# Adapter `bert-base-multilingual-cased-sw-wiki_pfeiffer` for bert-base-multilingual-cased
Pfeiffer Adapter trained with Masked Language Modelling on Swahili Wikipedia Articles for 100k steps and a batch size of 64.
**This adapter was created for usage with the [Adapters](https://github.com/Adapter-Hub/adapters) library.**
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("bert-base-multilingual-cased")
adapter_name = model.load_adapter("AdapterHub/bert-base-multilingual-cased-sw-wiki_pfeiffer")
model.set_active_adapters(adapter_name)
```
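Building on the snippet above, a short sketch of the MAD-X setup this language adapter was trained for (the task adapter path below is a placeholder, not a real repository):
```python
from adapters.composition import Stack

# Stack the Swahili language adapter under any compatible task adapter
# for zero-shot cross-lingual transfer (MAD-X style).
task_adapter = model.load_adapter("path/to/task_adapter")  # placeholder
model.active_adapters = Stack(adapter_name, task_adapter)
```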
## Architecture & Training
- Adapter architecture: pfeiffer
- Prediction head: None
- Dataset: [sw/wiki](https://adapterhub.ml/explore/sw/wiki/)
## Author Information
- Author name(s): Jonas Pfeiffer
- Author email: [email protected]
- Author links: [Website](https://pfeiffer.ai), [GitHub](https://github.com/jopfeiff), [Twitter](https://twitter.com/@PfeiffJo)
## Versions
- `nd` **(main)**
- `wd`
## Citation
```bibtex
@article{pfeiffer20madx,
title={{MAD-X}: An {A}dapter-based {F}ramework for {M}ulti-task {C}ross-lingual {T}ransfer},
author={Pfeiffer, Jonas and Vuli\'{c}, Ivan and Gurevych, Iryna and Ruder, Sebastian},
journal={arXiv preprint},
year={2020},
url={https://arxiv.org/pdf/2005.00052.pdf},
}
```
*This adapter has been auto-imported from https://github.com/Adapter-Hub/Hub/blob/master/adapters/ukp/bert-base-multilingual-cased-sw-wiki_pfeiffer.yaml*. |
Bronsn/llama3.2-1B-translatev1 | Bronsn | "2024-09-29T21:30:18Z" | 57 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-29T18:28:57Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
memeviss/IQ_1 | memeviss | "2025-04-17T18:31:31Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2025-04-17T18:26:30Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
Narkantak/mistral-7b-Intent-Classifier-Ashu | Narkantak | "2024-04-02T09:11:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-02T09:11:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
marisabatalla/autotrain-4lx4k-qa52n | marisabatalla | "2024-02-29T18:27:03Z" | 192 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"dataset:autotrain-4lx4k-qa52n/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-02-29T18:26:46Z" |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- autotrain-4lx4k-qa52n/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.09857235848903656
f1: 1.0
precision: 1.0
recall: 1.0
auc: 1.0
accuracy: 1.0
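A minimal inference sketch (not part of the original card), using the standard image-classification pipeline and one of the widget images:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="marisabatalla/autotrain-4lx4k-qa52n")
# the pipeline accepts local paths, PIL images, or URLs
print(classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"))
```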
|
kaierlong/gemma-chinese | kaierlong | "2024-04-15T07:45:17Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | "2024-04-15T03:47:59Z" | ---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- generator
model-index:
- name: gemma-chinese
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-chinese
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.38.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 |
hlumin/speecht5_finetuned_voxpopuli_nl | hlumin | "2023-08-30T23:31:37Z" | 82 | 0 | transformers | [
"transformers",
"pytorch",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"text-to-speech",
"lt",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2023-08-30T23:25:39Z" | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
- text-to-speech
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
language:
- lt
pipeline_tag: text-to-speech
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6484
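A minimal text-to-speech sketch (not part of the original card), following the standard SpeechT5 recipe; the vocoder and speaker-embedding source are assumptions, and the processor may need to be loaded from `microsoft/speecht5_tts` if it is not included in this repository:
```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("hlumin/speecht5_finetuned_voxpopuli_nl")
model = SpeechT5ForTextToSpeech.from_pretrained("hlumin/speecht5_finetuned_voxpopuli_nl")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# any 512-dim x-vector works as the speaker embedding; this public set is a common choice
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="hallo, dit is een test", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```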
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.52 | 5 | 0.6706 |
| No log | 1.04 | 10 | 0.6484 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3 |
CocoRoF/KoModernBERT-large-mlm-v17 | CocoRoF | "2025-03-24T04:45:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"generated_from_trainer",
"base_model:CocoRoF/KoModernBERT-large-mlm-v16",
"base_model:finetune:CocoRoF/KoModernBERT-large-mlm-v16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2025-03-23T10:01:20Z" | ---
library_name: transformers
license: apache-2.0
base_model: CocoRoF/KoModernBERT-large-mlm-v16
tags:
- generated_from_trainer
model-index:
- name: KoModernBERT-large-mlm-v17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KoModernBERT-large-mlm-v17
This model is a fine-tuned version of [CocoRoF/KoModernBERT-large-mlm-v16](https://huggingface.co/CocoRoF/KoModernBERT-large-mlm-v16) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 32
- total_train_batch_size: 1024
- total_eval_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.98) and epsilon=1e-07 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 90.5064 | 0.1156 | 500 | nan |
| 86.5566 | 0.2312 | 1000 | nan |
| 83.8022 | 0.3468 | 1500 | nan |
| 83.7241 | 0.4624 | 2000 | nan |
| 82.7234 | 0.5779 | 2500 | nan |
| 79.0157 | 0.6935 | 3000 | nan |
| 77.4688 | 0.8091 | 3500 | nan |
| 74.9247 | 0.9247 | 4000 | nan |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.1
|
sd-concepts-library/singsing-doll | sd-concepts-library | "2022-09-19T16:14:12Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2022-09-19T16:14:06Z" | ---
license: mit
---
### Singsing doll on Stable Diffusion
This is the `<singsing>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
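Outside the notebooks, the concept can also be loaded directly with diffusers; a minimal sketch (not part of the original card, base checkpoint chosen as an example):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
# registers the <singsing> token from this concept repository
pipe.load_textual_inversion("sd-concepts-library/singsing-doll")
image = pipe("a photo of a <singsing> doll on a wooden shelf").images[0]
```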
Here is the new concept you will be able to use as an `object`:




|
memorysaver/q-FrozenLake-v1-4x4-noSlippery | memorysaver | "2022-05-29T11:08:08Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-05-29T11:07:58Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="memorysaver/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
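The `load_from_hub` helper is not defined in this card; a minimal sketch of what it is expected to do, assuming the pickled dictionary layout used by the snippet above:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle the saved agent (qtable, env_id, eval settings)."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```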
|
devmoon732/mayen | devmoon732 | "2025-04-06T10:03:28Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-04-06T09:32:47Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: mayen
---
# Mayen
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mayen` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "mayen",
"lora_weights": "https://huggingface.co/devmoon732/mayen/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('devmoon732/mayen', weight_name='lora.safetensors')
image = pipeline('mayen').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/devmoon732/mayen/discussions) to add images that show off what you’ve made with this LoRA.
|
Grayx/fiufiu_476 | Grayx | "2025-01-16T22:21:18Z" | 44 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-16T22:18:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/bert-base-cased_monkeypox_top3_2_2e-5_16_undersampling_0.4 | isspek | "2025-03-23T13:27:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-23T13:27:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Ekhlass/phi2-flutter-questions | Ekhlass | "2024-04-08T08:38:47Z" | 48 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-08T08:36:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
e1113633/roomifai_sd15_ft_diningroom | e1113633 | "2023-11-02T10:03:30Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-11-02T09:36:37Z" |
# Roomifai Dining Room Stable Diffusion v1-5 Fine Tuning Model Card
This model is a fine-tuned model based on Stable Diffusion 1.5, built for our school project. It is capable of generating dining room designs given specific dining room prompts.
# Uses
You have to install all the dependencies (Python 3.10.x).
There is no GUI for testing the model. The test.py script is used for testing the model; the command is as follows:
> python test.py <model> <output folder> <prompts file> <prefix for output file>
For example:
> python test.py "./unet/diffusion_pytorch_model.safetensors" "output" "dingingroom_prompt.txt" "test1"
|
The-Masters-Golf-Reddit/LIVE | The-Masters-Golf-Reddit | "2025-04-13T19:02:49Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-13T18:56:19Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
xshubhamx/InLegal-legal-merge-ties-d-0-InLegal-w-1 | xshubhamx | "2024-04-20T17:16:32Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"merge",
"mergekit",
"lazymergekit",
"xshubhamx/InLegalBERT",
"xshubhamx/legal-bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-20T17:14:08Z" | ---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- xshubhamx/InLegalBERT
- xshubhamx/legal-bert-base-uncased
---
## Metrics
- loss: 0.9470
- accuracy: 0.8366
- precision: 0.8360
- recall: 0.8366
- precision_macro: 0.8141
- recall_macro: 0.7899
- macro_fpr: 0.0143
- weighted_fpr: 0.0138
- weighted_specificity: 0.9781
- macro_specificity: 0.9876
- weighted_sensitivity: 0.8366
- macro_sensitivity: 0.7899
- f1_micro: 0.8366
- f1_macro: 0.7978
- f1_weighted: 0.8350
- runtime: 21.6449
- samples_per_second: 59.6450
- steps_per_second: 7.4840
# InLegal-legal-merge-ties-d-0-InLegal-w-1
InLegal-legal-merge-ties-d-0-InLegal-w-1 is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [xshubhamx/InLegalBERT](https://huggingface.co/xshubhamx/InLegalBERT)
* [xshubhamx/legal-bert-base-uncased](https://huggingface.co/xshubhamx/legal-bert-base-uncased)
## 🧩 Configuration
```yaml
models:
- model: xshubhamx/InLegalBERT
parameters:
density: 0.53
weight: 0
- model: xshubhamx/legal-bert-base-uncased
parameters:
density: 0.53
weight: 1
merge_method: ties
base_model: xshubhamx/InLegalBERT
parameters:
normalize: false
int8_mask: true
dtype: float16
``` |
weijie210/zephyr-7b-UFB-0 | weijie210 | "2024-02-07T03:49:39Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"trl",
"dpo",
"generated_from_trainer",
"conversational",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-07T01:25:02Z" | ---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-UFB-0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-UFB-0
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1492
- Rewards/chosen: -1.5452
- Rewards/rejected: -7.2115
- Rewards/accuracies: 0.8359
- Rewards/margins: 5.6663
- Logps/rejected: -171.0846
- Logps/chosen: -143.6666
- Logits/rejected: -2.3237
- Logits/chosen: -2.3692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.36.1
- Pytorch 2.0.1+cu117
- Datasets 2.16.1
- Tokenizers 0.15.0
|
AdapterHub/xmod-base-be_BY | AdapterHub | "2023-08-31T12:59:06Z" | 4 | 0 | adapter-transformers | [
"adapter-transformers",
"xmod",
"adapterhub:be/cc100",
"be",
"license:mit",
"region:us"
] | null | "2023-08-31T12:47:02Z" | ---
tags:
- adapter-transformers
- xmod
- adapterhub:be/cc100
language:
- be
license: "mit"
---
# Adapter `AdapterHub/xmod-base-be_BY` for AdapterHub/xmod-base
An [adapter](https://adapterhub.ml) for the `AdapterHub/xmod-base` model that was trained on the [be/cc100](https://adapterhub.ml/explore/be/cc100/) dataset.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("AdapterHub/xmod-base")
adapter_name = model.load_adapter("AdapterHub/xmod-base-be_BY", source="hf", set_active=True)
```
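Once the adapter is active, a rough usage sketch (not part of the original card; it assumes the tokenizer ships with the base checkpoint and that the headless model returns the encoder hidden states):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AdapterHub/xmod-base")
inputs = tokenizer("Прывітанне, свет!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# first element: token representations routed through the Belarusian adapter
print(outputs[0].shape)
```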
## Architecture & Training
This adapter was extracted from the original model checkpoint [facebook/xmod-base](https://huggingface.co/facebook/xmod-base) to allow loading it independently via the Adapters library.
For more information on architecture and training, please refer to the original model card.
## Evaluation results
<!-- Add some description here -->
## Citation
[Lifting the Curse of Multilinguality by Pre-training Modular Transformers (Pfeiffer et al., 2022)](http://dx.doi.org/10.18653/v1/2022.naacl-main.255)
```
@inproceedings{pfeiffer-etal-2022-lifting,
title = "Lifting the Curse of Multilinguality by Pre-training Modular Transformers",
author = "Pfeiffer, Jonas and
Goyal, Naman and
Lin, Xi and
Li, Xian and
Cross, James and
Riedel, Sebastian and
Artetxe, Mikel",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.255",
doi = "10.18653/v1/2022.naacl-main.255",
pages = "3479--3495"
}
``` |
dar5654/segformer-b0-scene-parse-150-MASKED | dar5654 | "2023-04-29T20:03:23Z" | 31 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2023-04-29T15:34:13Z" | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-scene-parse-150-MASKED
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150-MASKED
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1526
- Mean Iou: 0.0217
- Mean Accuracy: 0.0580
- Overall Accuracy: 0.2746
- Per Category Iou: [0.2638779780993535, 0.24032657224553952, 0.28498201974847515, 0.1075812162299665, 0.14745268628426467, 0.048342869965219346, 0.0, 0.007290688724806103, 0.04780558672261605, 0.06559620777139805, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
- Per Category Accuracy: [0.5551073389128427, 0.47540841261768607, 0.4280130098767642, 0.6449145007547091, 0.4263212952616438, 0.051559171951657295, 0.0, 0.008099600657740192, 0.06573971674217831, 0.0695452132365953, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| 4.7792 | 1.0 | 20 | 4.7294 | 0.0082 | 0.0454 | 0.1893 | [0.2263585397369742, 0.13770136142176356, 0.08295638586455376, 0.08510788870213735, 0.12573291455024074, 0.02435003944847278, 0.0, 0.004065480375718896, 0.0017733053903393038, 0.09547671544063606, 0.0, 0.0, 0.00046794942973620344, 0.0, 0.0, 0.0, 0.0, 0.0003653809493550232, 0.0, 0.0, nan, 0.0, 0.0, 0.008303859757035214, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0] | [0.4716825785763388, 0.2136232639104242, 0.09227762360874885, 0.6465273039306908, 0.5643826822947624, 0.024817518248175182, 0.0, 0.0042377260981912145, 0.0018640077057543434, 0.10115023889577066, 0.0, 0.0, 0.0004903142166191589, nan, 0.0, 0.0, nan, 0.001218026796589525, 0.0, 0.0, nan, nan, 0.0, 0.010582425335110135, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.6816 | 2.0 | 40 | 4.3777 | 0.0172 | 0.0508 | 0.2436 | [0.2348784161662965, 0.1780159659740713, 0.1725553209314372, 0.11519214696920146, 0.1519642591474354, 0.05501165920088421, 0.008008356545961003, 0.003268125562869637, 0.06320147075839194, 0.033278833708018256, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.49203944919252063, 0.5863196761642498, 0.24353236057649272, 0.45883216508487895, 0.4128408739687597, 0.05860476247457221, 0.010855884203901826, 0.0033074935400516795, 0.08768863044486462, 0.03457795080516723, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.3571 | 3.0 | 60 | 4.2442 | 0.0166 | 0.0524 | 0.2571 | [0.25585106151712383, 0.22004710007836228, 0.22139639459642338, 0.10209920082512318, 0.1575995748489595, 0.017118189937481394, 0.0, 0.007236489870641267, 0.03938333712881877, 0.008957958671236131, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5795454321624768, 0.44004779802440347, 0.3099841852415481, 0.6467961044600211, 0.40188060198283443, 0.017613976307287303, 0.0, 0.00787408973455485, 0.05089900467339731, 0.009343479030260131, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 3.9658 | 4.0 | 80 | 4.1981 | 0.0207 | 0.0555 | 0.2731 | [0.26863872743436906, 0.26573623577278954, 0.2321627542307547, 0.10446031518997217, 0.16009038296656186, 0.046391399460182774, 0.0, 0.004261499526016889, 0.04043589899112432, 0.01742889012827663, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.6207791747342379, 0.47583015989183425, 0.2976531495240653, 0.6644438930587433, 0.4261667578041416, 0.04862031829603925, 0.0, 0.0046041813483673946, 0.05351218294031608, 0.01769598301185631, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.0603 | 5.0 | 100 | 4.1526 | 0.0217 | 0.0580 | 0.2746 | [0.2638779780993535, 0.24032657224553952, 0.28498201974847515, 0.1075812162299665, 0.14745268628426467, 0.048342869965219346, 0.0, 0.007290688724806103, 0.04780558672261605, 0.06559620777139805, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, 0.0, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] | [0.5551073389128427, 0.47540841261768607, 0.4280130098767642, 0.6449145007547091, 0.4263212952616438, 0.051559171951657295, 0.0, 0.008099600657740192, 0.06573971674217831, 0.0695452132365953, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ctoxz/GPU | ctoxz | "2025-04-18T01:02:17Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-18T01:02:07Z" | |
albertus-sussex/veriscrape-fixed-simcse-nbaplayer-reference_1_to_verify_9-fold-8 | albertus-sussex | "2025-04-04T14:29:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-04-04T14:29:21Z" | |
Jadsalameh31/finetuning-sentiment-model-3000-samples | Jadsalameh31 | "2023-03-06T18:49:57Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-06T18:46:13Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7012
- eval_accuracy: 0.4933
- eval_f1: 0.6607
- eval_runtime: 7.7289
- eval_samples_per_second: 38.816
- eval_steps_per_second: 2.458
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
|
klemiec/unit0 | klemiec | "2023-03-02T19:48:34Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-02T19:43:25Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 269.18 +/- 18.94
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; adjust it to the file stored in this repository):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename assumed) and load the PPO policy
checkpoint = load_from_hub(repo_id="klemiec/unit0", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
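A short evaluation sketch (assumptions: a recent Stable-Baselines3 release and a Gymnasium LunarLander environment with Box2D installed):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```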
|
second-state/phi-4-GGUF | second-state | "2025-01-11T13:58:41Z" | 669 | 0 | transformers | [
"transformers",
"gguf",
"phi3",
"text-generation",
"custom_code",
"en",
"base_model:microsoft/phi-4",
"base_model:quantized:microsoft/phi-4",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-01-09T02:17:27Z" | ---
base_model: microsoft/phi-4
license: mit
license_link: https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
library_name: transformers
model_creator: Microsoft
model_name: phi-4
quantized_by: Second State Inc.
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Phi-4-GGUF
## Original Model
[microsoft/phi-4](https://huggingface.co/microsoft/phi-4)
## Run with LlamaEdge
- LlamaEdge version: [v0.16.0](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.16.0) and above
- Prompt template
- Prompt type: `phi-4-chat`
- Prompt string
```text
<|im_start|>system<|im_sep|>
{system_message}<|im_end|>
<|im_start|>user<|im_sep|>
{user_message}<|im_end|>
<|im_start|>assistant<|im_sep|>
```
- Context size: `16000`
- Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:phi-4-Q5_K_M.gguf \
llama-api-server.wasm \
--prompt-template phi-4-chat \
--ctx-size 16000 \
--model-name phi-3-mini
```
- Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:phi-4-Q5_K_M.gguf \
llama-chat.wasm \
--prompt-template phi-4-chat \
--ctx-size 16000
```
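Once the API server above is running, it can be queried from any OpenAI-compatible client. A hedged sketch in Python (the default port and endpoint path are assumptions):

```python
import requests

# Assumptions: llama-api-server listens on port 8080 and exposes an OpenAI-compatible route
response = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "phi-3-mini",  # matches the --model-name passed to the server above
        "messages": [{"role": "user", "content": "Explain what a GGUF file is."}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```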
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [phi-4-Q2_K.gguf](https://huggingface.co/second-state/phi-4-GGUF/blob/main/phi-4-Q2_K.gguf) | Q2_K | 2 | 5.55 GB| smallest, significant quality loss - not recommended for most purposes |
| [phi-4-Q3_K_L.gguf](https://huggingface.co/second-state/phi-4-GGUF/blob/main/phi-4-Q3_K_L.gguf) | Q3_K_L | 3 | 7.93 GB| small, substantial quality loss |
| [phi-4-Q3_K_M.gguf](https://huggingface.co/second-state/phi-4-GGUF/blob/main/phi-4-Q3_K_M.gguf) | Q3_K_M | 3 | 7.36 GB| very small, high quality loss |
| [phi-4-Q3_K_S.gguf](https://huggingface.co/second-state/phi-4-GGUF/blob/main/phi-4-Q3_K_S.gguf) | Q3_K_S | 3 | 6.50 GB| very small, high quality loss |
| [phi-4-Q4_0.gguf](https://huggingface.co/second-state/phi-4-GGUF/blob/main/phi-4-Q4_0.gguf) | Q4_0 | 4 | 8.38 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [phi-4-Q4_K_M.gguf](https://huggingface.co/second-state/phi-4-GGUF/blob/main/phi-4-Q4_K_M.gguf) | Q4_K_M | 4 | 9.05 GB| medium, balanced quality - recommended |
| [phi-4-Q4_K_S.gguf](https://huggingface.co/second-state/phi-4-GGUF/blob/main/phi-4-Q4_K_S.gguf) | Q4_K_S | 4 | 8.44 GB| small, greater quality loss |
| [phi-4-Q5_0.gguf](https://huggingface.co/second-state/phi-4-GGUF/blob/main/phi-4-Q5_0.gguf) | Q5_0 | 5 | 10.2 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [phi-4-Q5_K_M.gguf](https://huggingface.co/second-state/phi-4-GGUF/blob/main/phi-4-Q5_K_M.gguf) | Q5_K_M | 5 | 10.6 GB| large, very low quality loss - recommended |
| [phi-4-Q5_K_S.gguf](https://huggingface.co/second-state/phi-4-GGUF/blob/main/phi-4-Q5_K_S.gguf) | Q5_K_S | 5 | 10.2 GB| large, low quality loss - recommended |
| [phi-4-Q6_K.gguf](https://huggingface.co/second-state/phi-4-GGUF/blob/main/phi-4-Q6_K.gguf) | Q6_K | 6 | 12.0 GB| very large, extremely low quality loss |
| [phi-4-Q8_0.gguf](https://huggingface.co/second-state/phi-4-GGUF/blob/main/phi-4-Q8_0.gguf) | Q8_0 | 8 | 15.6 GB| very large, extremely low quality loss - not recommended |
| [phi-4-f16.gguf](https://huggingface.co/second-state/phi-4-GGUF/blob/main/phi-4-f16.gguf) | f16 | 16 | 29.3 GB| |
*Quantized with llama.cpp b4450.* |
BEASTBOYJAY/my-fine-tuned-summarizer | BEASTBOYJAY | "2024-11-16T05:53:44Z" | 103 | 0 | transformers | [
"transformers",
"safetensors",
"encoder-decoder",
"text2text-generation",
"en",
"dataset:ccdv/cnn_dailymail",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-11-16T05:36:03Z" | ---
library_name: transformers
datasets:
- ccdv/cnn_dailymail
language:
- en
base_model:
- google-bert/bert-base-uncased
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is used for generating a summary of a provided paragraph.
- **Developed by:** BEASTBOYJAY
- **Model type:** Transformer(encoder)
- **Language(s) (NLP):** English
- **Finetuned from model:** Bert-base-uncased
## Uses
- For summarization purposes only
## Bias, Risks, and Limitations
This model is fine-tuned on a very small dataset and may need further fine-tuning for better results. (This model was fine-tuned for educational purposes only.)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import EncoderDecoderModel, BertTokenizer
class TextSummarizer:
def __init__(self, model_path, tokenizer_name="bert-base-uncased"):
self.tokenizer = BertTokenizer.from_pretrained(tokenizer_name)
self.model = EncoderDecoderModel.from_pretrained(model_path)
def summarize(self, text, max_input_length=512):
inputs = self.tokenizer(
text,
return_tensors="pt",
truncation=True,
padding="max_length",
max_length=max_input_length,
)
summary_ids = self.model.generate(
inputs["input_ids"],
attention_mask=inputs["attention_mask"],
decoder_start_token_id=self.tokenizer.cls_token_id,
max_length=128,
num_beams=4,
length_penalty=1.5,
no_repeat_ngram_size=1,
early_stopping=True,
)
summary = self.tokenizer.decode(summary_ids[0], skip_special_tokens=True)
return summary
if __name__ == "__main__":
summarizer = TextSummarizer(model_path="BEASTBOYJAY/my-fine-tuned-summarizer")
test_article = "Your article or paragraph"
summary = summarizer.summarize(test_article)
print("Generated Summary:", summary)
``` |
Ppoyaa/L3-Inca-8B-v0.8 | Ppoyaa | "2024-06-25T03:16:46Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Nitral-AI/Hathor_Stable-v0.2-L3-8B",
"base_model:merge:Nitral-AI/Hathor_Stable-v0.2-L3-8B",
"base_model:Sao10K/L3-8B-Stheno-v3.2",
"base_model:merge:Sao10K/L3-8B-Stheno-v3.2",
"base_model:grimjim/Llama-3-Luminurse-v0.2-OAS-8B",
"base_model:merge:grimjim/Llama-3-Luminurse-v0.2-OAS-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-22T15:15:22Z" | ---
base_model:
- NurtureAI/Meta-Llama-3-8B-Instruct-32k
- Nitral-AI/Hathor-L3-8B-v.02
- grimjim/Llama-3-Luminurse-v0.2-OAS-8B
- Sao10K/L3-8B-Stheno-v3.2
library_name: transformers
tags:
- mergekit
- merge
---

***
### L3-Inca-8B-v0.8
[L3-Inca-8B-v0.8](https://huggingface.co/Ppoyaa/L3-Inca-8B-v0.8) is a merge of the following models:
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [Nitral-AI/Hathor-L3-8B-v.02](https://huggingface.co/Nitral-AI/Hathor-L3-8B-v.02)
* [grimjim/Llama-3-Luminurse-v0.2-OAS-8B](https://huggingface.co/grimjim/Llama-3-Luminurse-v0.2-OAS-8B)
using [NurtureAI/Meta-Llama-3-8B-Instruct-32k](https://huggingface.co/NurtureAI/Meta-Llama-3-8B-Instruct-32k) as the base.
>[!IMPORTANT]
> UPDATE:
> Changed the merging method from **model_stock** to **ties** and made Stheno have the most weight and density.
***
### Quantized Models by [mradermacher](https://huggingface.co/mradermacher)
• Static
[L3-Inca-8B-v0.8-GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-GGUF)
• Imatrix
[L3-Inca-8B-v0.8-i1-GGUF](https://huggingface.co/mradermacher/L3-Inca-8B-v0.8-i1-GGUF)
***
### Configuration
```yaml
models:
- model: Sao10K/L3-8B-Stheno-v3.2
parameters:
density: 0.85
weight: 0.5
- model: Nitral-AI/Hathor-L3-8B-v.02
parameters:
density: 0.75
weight: 0.3
- model: grimjim/Llama-3-Luminurse-v0.2-OAS-8B
parameters:
density: 0.75
weight: 0.2
merge_method: ties
base_model: NurtureAI/Meta-Llama-3-8B-Instruct-32k
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
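As a hedged usage sketch (not part of the original card), the merged model can be loaded with `transformers`; `device_map="auto"` assumes `accelerate` is installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ppoyaa/L3-Inca-8B-v0.8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype above
    device_map="auto",           # assumption: accelerate is available
)
```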
|
abhijeet007/t5-Large_FineTunned | abhijeet007 | "2024-03-27T07:57:09Z" | 63 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-03-27T07:55:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
apparaomulpuri/alpaca-custom-model | apparaomulpuri | "2023-06-26T13:54:37Z" | 5 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-06-26T10:53:07Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
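As a hedged illustration (not part of the original card), the same settings can be expressed with the `BitsAndBytesConfig` class from `transformers`:

```python
from transformers import BitsAndBytesConfig

# Mirrors the quantization settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype="float32",
)
```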
### Framework versions
- PEFT 0.4.0.dev0
|
isspek/roberta-base_covid_chatgpt_3_2e-5_16_undersampling_0.2 | isspek | "2025-01-01T16:20:24Z" | 200 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-28T23:28:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jysssacc/bloomz-1b1_adalora_627_lr5e-05_bs4_epoch5_wd0.01 | jysssacc | "2024-01-13T22:30:46Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:bigscience/bloomz-1b1",
"base_model:adapter:bigscience/bloomz-1b1",
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | "2024-01-13T19:41:57Z" | ---
license: bigscience-bloom-rail-1.0
library_name: peft
tags:
- generated_from_trainer
base_model: bigscience/bloomz-1b1
model-index:
- name: bloomz-1b1_adalora_627_lr5e-05_bs4_epoch5_wd0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloomz-1b1_adalora_627_lr5e-05_bs4_epoch5_wd0.01
This model is a fine-tuned version of [bigscience/bloomz-1b1](https://huggingface.co/bigscience/bloomz-1b1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.0291 | 1.0 | 157 | 4.5495 |
| 4.582 | 2.0 | 314 | 4.0083 |
| 4.1412 | 3.0 | 471 | 3.5810 |
| 3.6919 | 4.0 | 628 | 3.3076 |
| 3.5125 | 5.0 | 785 | 3.2440 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.0.1
- Datasets 2.16.1
- Tokenizers 0.15.0 |
yesj1234/xlsr_cycle0_ko | yesj1234 | "2023-09-07T04:20:40Z" | 77 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"./sample_speech.py",
"generated_from_trainer",
"dataset:sample_speech",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-09-07T04:16:24Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- automatic-speech-recognition
- ./sample_speech.py
- generated_from_trainer
datasets:
- sample_speech
model-index:
- name: ko-xlsr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ko-xlsr
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the ./SAMPLE_SPEECH.PY - NA dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5649
- Cer: 0.3569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.5849 | 22.22 | 1000 | 2.5846 | 0.5985 |
| 0.7224 | 44.44 | 2000 | 1.5880 | 0.3664 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
mradermacher/Pathfinder-RP-12B-RU-i1-GGUF | mradermacher | "2025-01-30T09:16:40Z" | 874 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Aleteian/Pathfinder-RP-12B-RU",
"base_model:quantized:Aleteian/Pathfinder-RP-12B-RU",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-30T04:31:32Z" | ---
base_model: Aleteian/Pathfinder-RP-12B-RU
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Aleteian/Pathfinder-RP-12B-RU
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Pathfinder-RP-12B-RU-i1-GGUF/resolve/main/Pathfinder-RP-12B-RU.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
MinaMila/gemma2_GermanCredit_cfda_14ep_42 | MinaMila | "2025-03-19T01:12:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-9b",
"base_model:finetune:unsloth/gemma-2-9b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-19T01:12:12Z" | ---
base_model: unsloth/gemma-2-9b
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
schoonhovenra/20240502 | schoonhovenra | "2024-05-06T00:21:15Z" | 191 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2024-05-06T00:21:06Z" | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: '20240502'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20240502
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 400
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.12.0
- Tokenizers 0.15.1
|
mradermacher/ThoughtStream-4B-v0.2-GGUF | mradermacher | "2025-03-31T14:32:33Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:SkunkworksAI/reasoning-0.01",
"dataset:trollek/ThoughtfulAssistant-v01",
"dataset:trollek/ThoughtfulAssistant-v02",
"base_model:trollek/ThoughtStream-4B-v0.2",
"base_model:quantized:trollek/ThoughtStream-4B-v0.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-31T14:18:32Z" | ---
base_model: trollek/ThoughtStream-4B-v0.2
datasets:
- SkunkworksAI/reasoning-0.01
- trollek/ThoughtfulAssistant-v01
- trollek/ThoughtfulAssistant-v02
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/trollek/ThoughtStream-4B-v0.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ThoughtStream-4B-v0.2-GGUF/resolve/main/ThoughtStream-4B-v0.2.Q2_K.gguf) | Q2_K | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/ThoughtStream-4B-v0.2-GGUF/resolve/main/ThoughtStream-4B-v0.2.Q3_K_S.gguf) | Q3_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/ThoughtStream-4B-v0.2-GGUF/resolve/main/ThoughtStream-4B-v0.2.Q3_K_M.gguf) | Q3_K_M | 2.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ThoughtStream-4B-v0.2-GGUF/resolve/main/ThoughtStream-4B-v0.2.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/ThoughtStream-4B-v0.2-GGUF/resolve/main/ThoughtStream-4B-v0.2.IQ4_XS.gguf) | IQ4_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/ThoughtStream-4B-v0.2-GGUF/resolve/main/ThoughtStream-4B-v0.2.Q4_K_S.gguf) | Q4_K_S | 2.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ThoughtStream-4B-v0.2-GGUF/resolve/main/ThoughtStream-4B-v0.2.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ThoughtStream-4B-v0.2-GGUF/resolve/main/ThoughtStream-4B-v0.2.Q5_K_S.gguf) | Q5_K_S | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/ThoughtStream-4B-v0.2-GGUF/resolve/main/ThoughtStream-4B-v0.2.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/ThoughtStream-4B-v0.2-GGUF/resolve/main/ThoughtStream-4B-v0.2.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ThoughtStream-4B-v0.2-GGUF/resolve/main/ThoughtStream-4B-v0.2.Q8_0.gguf) | Q8_0 | 4.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ThoughtStream-4B-v0.2-GGUF/resolve/main/ThoughtStream-4B-v0.2.f16.gguf) | f16 | 8.0 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
huggingtweets/coronavid19 | huggingtweets | "2021-05-21T23:30:27Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/coronavid19/1608807621950/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1232060545626497024/ltc63x4__400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Coronavirus 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@coronavid19 bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@coronavid19's tweets](https://twitter.com/coronavid19).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>1618</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>12</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>96</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>1510</td>
</tr>
</tbody>
</table>
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lgjd18p/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @coronavid19's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ki9s94y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ki9s94y/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline

generator = pipeline('text-generation',
                     model='huggingtweets/coronavid19')
generator("My dream is", num_return_sequences=5)
```
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
wenge-research/yayi-7b-llama2 | wenge-research | "2023-09-13T02:25:50Z" | 1,507 | 10 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"yayi",
"zh",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-21T10:10:18Z" | ---
language:
- zh
- en
pipeline_tag: text-generation
tags:
- yayi
---
# YaYi Large Language Model
## Introduction
[YaYi](https://www.wenge.com/yayi/index.html) was instruction fine-tuned on millions of manually constructed, high-quality domain data samples. The training data covers five key domains — media publicity, public opinion analysis, public safety, financial risk control, and urban governance — and over a hundred natural language instruction tasks. Throughout the iteration from pre-trained initialization weights to the domain model, we progressively strengthened YaYi's foundational Chinese capabilities and domain analysis capabilities, and added multi-turn dialogue and partial plug-in capabilities. In addition, through continuous human feedback and optimization from hundreds of users during internal testing, we further improved the model's performance and safety.
By open-sourcing the YaYi model, we contribute our part to the development of the Chinese pre-trained large language model open-source community, and through open source we hope to build the YaYi model ecosystem together with every partner.
*News: 🔥 YaYi has open-sourced a Chinese-optimized model version based on LLaMA 2 to explore the latest practices for Chinese multi-domain tasks.*
## Model Download
| Model | 🤗HF Model Name | Download Links |
| --------- | --------- | --------- |
| YaYi-7B | wenge-research/yayi-7b | [Download](https://huggingface.co/wenge-research/yayi-7b) |
| YaYi-7B-Llama2 | wenge-research/yayi-7b-llama2 | [Download](https://huggingface.co/wenge-research/yayi-7b-llama2) |
| YaYi-13B-Llama2 | wenge-research/yayi-13b-llama2 | [Download](https://huggingface.co/wenge-research/yayi-13b-llama2) |
| YaYi-70B-Llama2 | wenge-research/yayi-70b-llama2 | [Download](https://huggingface.co/wenge-research/yayi-70b-llama2) |
For more details, please refer to our [💻Github Repo](https://github.com/wenge-research/YaYi).
## Run
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, GenerationConfig
from transformers import StoppingCriteria, StoppingCriteriaList
pretrained_model_name_or_path = "wenge-research/yayi-7b-llama2"
tokenizer = LlamaTokenizer.from_pretrained(pretrained_model_name_or_path)
model = LlamaForCausalLM.from_pretrained(pretrained_model_name_or_path, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=False)
# Define the stopping criteria
class KeywordsStoppingCriteria(StoppingCriteria):
def __init__(self, keywords_ids:list):
self.keywords = keywords_ids
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
if input_ids[0][-1] in self.keywords:
return True
return False
stop_words = ["<|End|>", "<|YaYi|>", "<|Human|>", "</s>"]
stop_ids = [tokenizer.encode(w)[-1] for w in stop_words]
stop_criteria = KeywordsStoppingCriteria(stop_ids)
# inference
prompt = "你是谁?"
formatted_prompt = f"""<|System|>:
You are a helpful, respectful and honest assistant named YaYi developed by Beijing Wenge Technology Co.,Ltd. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<|Human|>:
{prompt}
<|YaYi|>:
"""
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
eos_token_id = tokenizer("<|End|>").input_ids[0]
generation_config = GenerationConfig(
eos_token_id=eos_token_id,
pad_token_id=eos_token_id,
do_sample=True,
max_new_tokens=256,
temperature=0.3,
repetition_penalty=1.1,
no_repeat_ngram_size=0
)
response = model.generate(**inputs, generation_config=generation_config, stopping_criteria=StoppingCriteriaList([stop_criteria]))
response = [response[0][len(inputs.input_ids[0]):]]
response_str = tokenizer.batch_decode(response, skip_special_tokens=False, clean_up_tokenization_spaces=False)[0]
print(response_str)
```
---
# YaYi
## Introduction
[YaYi](https://www.wenge.com/yayi/index.html) was fine-tuned on millions of artificially constructed, high-quality domain data samples. This training data covers five key domains: media publicity, public opinion analysis, public safety, financial risk control, and urban governance, encompassing over a hundred natural language instruction tasks. Throughout the iterative development process of YaYi, starting from pre-training initialization weights and progressing to the domain-specific model, we have steadily enhanced its foundational Chinese language capabilities and domain analysis capabilities. We've also introduced multi-turn conversation enhancements and integrated various plug-in capabilities. Furthermore, through continuous manual feedback and optimization from hundreds of users during the internal testing phase, we've meticulously refined the model's performance and security.
By open-sourcing the YaYi model, we will contribute our own efforts to the development of the Chinese pre-trained large language model open-source community. Through this open-source initiative, we seek to collaborate with every partner to build the YaYi model ecosystem together.
*News: 🔥 YaYi has open sourced the Chinese optimization model version based on LLaMA 2 to explore the latest practices suitable for Chinese multi-domain tasks.*
## Model download
| Model | 🤗HF Model Name | Download Links |
| --------- | --------- | --------- |
| YaYi-7B | wenge-research/yayi-7b | [Download](https://huggingface.co/wenge-research/yayi-7b) |
| YaYi-7B-Llama2 | wenge-research/yayi-7b-llama2 | [Download](https://huggingface.co/wenge-research/yayi-7b-llama2) |
| YaYi-13B-Llama2 | wenge-research/yayi-13b-llama2 | [Download](https://huggingface.co/wenge-research/yayi-13b-llama2) |
| YaYi-70B-Llama2 | wenge-research/yayi-70b-llama2 | [Download](https://huggingface.co/wenge-research/yayi-70b-llama2) |
For more details, please refer to our [💻Github Repo](https://github.com/wenge-research/YaYi).
## Run
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer, GenerationConfig
from transformers import StoppingCriteria, StoppingCriteriaList
pretrained_model_name_or_path = "wenge-research/yayi-7b-llama2"
tokenizer = LlamaTokenizer.from_pretrained(pretrained_model_name_or_path)
model = LlamaForCausalLM.from_pretrained(pretrained_model_name_or_path, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=False)
# Define the stopping criteria
class KeywordsStoppingCriteria(StoppingCriteria):
def __init__(self, keywords_ids:list):
self.keywords = keywords_ids
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
if input_ids[0][-1] in self.keywords:
return True
return False
stop_words = ["<|End|>", "<|YaYi|>", "<|Human|>", "</s>"]
stop_ids = [tokenizer.encode(w)[-1] for w in stop_words]
stop_criteria = KeywordsStoppingCriteria(stop_ids)
# inference
prompt = "你是谁?"
formatted_prompt = f"""<|System|>:
You are a helpful, respectful and honest assistant named YaYi developed by Beijing Wenge Technology Co.,Ltd. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<|Human|>:
{prompt}
<|YaYi|>:
"""
inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)
eos_token_id = tokenizer("<|End|>").input_ids[0]
generation_config = GenerationConfig(
eos_token_id=eos_token_id,
pad_token_id=eos_token_id,
do_sample=True,
max_new_tokens=256,
temperature=0.3,
repetition_penalty=1.1,
no_repeat_ngram_size=0
)
response = model.generate(**inputs, generation_config=generation_config, stopping_criteria=StoppingCriteriaList([stop_criteria]))
response = [response[0][len(inputs.input_ids[0]):]]
response_str = tokenizer.batch_decode(response, skip_special_tokens=False, clean_up_tokenization_spaces=False)[0]
print(response_str)
```
|
baddii/23_baddii_20_06 | baddii | "2025-02-09T06:34:01Z" | 14 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-09T06:32:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Beaflm/dlr_unit1_LunarLander | Beaflm | "2023-11-28T23:42:30Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-28T23:41:26Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.63 +/- 26.94
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed to follow the standard `huggingface_sb3` naming convention; adjust it to the actual file in this repository):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename below is an assumption based on the usual course convention.
checkpoint = load_from_hub(repo_id="Beaflm/dlr_unit1_LunarLander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
NewEden/adv-ckpt-480 | NewEden | "2025-03-30T12:58:45Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-03-30T12:58:45Z" | |
huggingtweets/elsasingular-michellexotter-nyxxx696 | huggingtweets | "2023-03-11T22:30:48Z" | 115 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-03-11T22:29:29Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/elsasingular-michellexotter-nyxxx696/1678573843308/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1599666828602740737/xkaxWudG_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1616081980856340480/AVkhh3Fo_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1592255638394077184/ugsW8sO4_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">❄️Elsa🏳️⚧️ & Nyx 🇧🇷🌸🏳️⚧️ & Michelle Otter 🏳️⚧️🦦</div>
<div style="text-align: center; font-size: 14px;">@elsasingular-michellexotter-nyxxx696</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ❄️Elsa🏳️⚧️ & Nyx 🇧🇷🌸🏳️⚧️ & Michelle Otter 🏳️⚧️🦦.
| Data | ❄️Elsa🏳️⚧️ | Nyx 🇧🇷🌸🏳️⚧️ | Michelle Otter 🏳️⚧️🦦 |
| --- | --- | --- | --- |
| Tweets downloaded | 3226 | 1504 | 3194 |
| Retweets | 68 | 24 | 37 |
| Short tweets | 1246 | 524 | 702 |
| Tweets kept | 1912 | 956 | 2455 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/or027l6u/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elsasingular-michellexotter-nyxxx696's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ei4il6l7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ei4il6l7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elsasingular-michellexotter-nyxxx696')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_lora2 | tyzhu | "2024-06-06T11:50:46Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"dataset:tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa",
"base_model:Qwen/Qwen1.5-4B",
"base_model:adapter:Qwen/Qwen1.5-4B",
"license:other",
"model-index",
"region:us"
] | null | "2024-06-04T14:01:33Z" | ---
license: other
base_model: Qwen/Qwen1.5-4B
tags:
- generated_from_trainer
datasets:
- tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa
metrics:
- accuracy
model-index:
- name: lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_lora2
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa
type: tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa
metrics:
- name: Accuracy
type: accuracy
value: 0.5108253968253968
library_name: peft
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_lora2
This model is a fine-tuned version of [Qwen/Qwen1.5-4B](https://huggingface.co/Qwen/Qwen1.5-4B) on the tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6298
- Accuracy: 0.5108
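
Because this repository holds a LoRA adapter (PEFT), a minimal loading sketch could look like the following; the prompt format is illustrative and may not match the format used during fine-tuning:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-4B", torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-4B")

# Attach the LoRA adapter weights from this repository on top of the base model.
model = PeftModel.from_pretrained(base, "tyzhu/lmind_hotpot_train8000_eval7405_v1_doc_qa_Qwen_Qwen1.5-4B_lora2")

inputs = tokenizer("Question: Who directed the film Inception?\nAnswer:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```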
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-------:|:-----:|:--------:|:---------------:|
| 1.7619 | 0.9998 | 1089 | 0.5172 | 2.2994 |
| 1.648 | 1.9995 | 2178 | 0.5210 | 2.2683 |
| 1.4941 | 2.9993 | 3267 | 0.5214 | 2.3185 |
| 1.3627 | 4.0 | 4357 | 0.5190 | 2.4249 |
| 1.2234 | 4.9998 | 5446 | 0.5152 | 2.5963 |
| 1.1107 | 5.9995 | 6535 | 0.5130 | 2.7933 |
| 0.9891 | 6.9993 | 7624 | 0.5119 | 2.9422 |
| 0.919 | 8.0 | 8714 | 0.5077 | 3.1141 |
| 0.833 | 8.9998 | 9803 | 0.5084 | 3.1755 |
| 0.7635 | 9.9977 | 10890 | 0.5085 | 3.3117 |
| 0.6899 | 10.9998 | 11979 | 0.5072 | 3.3147 |
| 0.6427 | 11.9995 | 13068 | 0.5101 | 3.4025 |
| 0.604 | 12.9993 | 14157 | 0.5103 | 3.3905 |
| 0.5507 | 14.0 | 15247 | 0.5088 | 3.4740 |
| 0.5099 | 14.9998 | 16336 | 0.5085 | 3.4772 |
| 0.478 | 15.9995 | 17425 | 0.5088 | 3.5259 |
| 0.4545 | 16.9993 | 18514 | 0.5094 | 3.5391 |
| 0.427 | 18.0 | 19604 | 0.5095 | 3.5887 |
| 0.4083 | 18.9998 | 20693 | 0.5097 | 3.5945 |
| 0.3818 | 19.9977 | 21780 | 0.5108 | 3.6298 |
### Framework versions
- PEFT 0.5.0
- Transformers 4.40.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
hugo-albert/roberta-large-pos | hugo-albert | "2024-10-28T10:43:27Z" | 126 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"token-classification",
"generated_from_trainer",
"base_model:PlanTL-GOB-ES/roberta-large-bne",
"base_model:finetune:PlanTL-GOB-ES/roberta-large-bne",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-10-11T15:13:40Z" | ---
library_name: transformers
license: apache-2.0
base_model: PlanTL-GOB-ES/roberta-large-bne
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-large-pos
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-pos
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0721
- Precision: 0.9821
- Recall: 0.9856
- F1: 0.9838
- Accuracy: 0.9845
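
A minimal inference sketch with the `transformers` pipeline (the model name suggests Spanish part-of-speech tagging on top of roberta-large-bne, but the exact tag set is not documented here; the example sentence is illustrative):

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="hugo-albert/roberta-large-pos",
    aggregation_strategy="simple",
)
print(tagger("El perro duerme en el jardín."))
```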
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2213 | 1.0 | 603 | 0.0835 | 0.9761 | 0.9807 | 0.9784 | 0.9800 |
| 0.0336 | 2.0 | 1206 | 0.0756 | 0.9794 | 0.9832 | 0.9813 | 0.9808 |
| 0.0147 | 3.0 | 1809 | 0.0721 | 0.9821 | 0.9856 | 0.9838 | 0.9845 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Tokenizers 0.19.1
|
zelk12/MT1-GB-gemma-2-9B | zelk12 | "2024-10-12T13:40:14Z" | 12 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT1-BB-gemma-2-RIv0.1RAt0.25v0.1-9B",
"base_model:merge:zelk12/MT1-BB-gemma-2-RIv0.1RAt0.25v0.1-9B",
"base_model:zelk12/MT1-GP-gemma-2-RPMHv0.1RAt0.25v0.1-9B",
"base_model:merge:zelk12/MT1-GP-gemma-2-RPMHv0.1RAt0.25v0.1-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-12T13:34:00Z" | ---
base_model:
- zelk12/MT1-GP-gemma-2-RPMHv0.1RAt0.25v0.1-9B
- zelk12/MT1-BB-gemma-2-RIv0.1RAt0.25v0.1-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT1-GP-gemma-2-RPMHv0.1RAt0.25v0.1-9B](https://huggingface.co/zelk12/MT1-GP-gemma-2-RPMHv0.1RAt0.25v0.1-9B)
* [zelk12/MT1-BB-gemma-2-RIv0.1RAt0.25v0.1-9B](https://huggingface.co/zelk12/MT1-BB-gemma-2-RIv0.1RAt0.25v0.1-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT1-GP-gemma-2-RPMHv0.1RAt0.25v0.1-9B
- model: zelk12/MT1-BB-gemma-2-RIv0.1RAt0.25v0.1-9B
merge_method: slerp
base_model: zelk12/MT1-GP-gemma-2-RPMHv0.1RAt0.25v0.1-9B
dtype: bfloat16
parameters:
t: 0.5
```
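
A minimal inference sketch for the merged model (standard Gemma-2 loading with `transformers`; the chat-template usage assumes the merge keeps the Gemma-2 instruct format):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("zelk12/MT1-GB-gemma-2-9B")
model = AutoModelForCausalLM.from_pretrained(
    "zelk12/MT1-GB-gemma-2-9B", torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumes the merged model still follows the Gemma-2 instruct chat format.
messages = [{"role": "user", "content": "Write a haiku about merging models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```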
|
stefan-it/hmbench-ajmc-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2 | stefan-it | "2023-10-17T21:30:16Z" | 8 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"fr",
"base_model:hmteams/teams-base-historic-multilingual-discriminator",
"base_model:finetune:hmteams/teams-base-historic-multilingual-discriminator",
"license:mit",
"region:us"
] | token-classification | "2023-10-17T10:39:11Z" | ---
language: fr
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: hmteams/teams-base-historic-multilingual-discriminator
widget:
- text: — 469 . Πεδία . Les tribraques formés par un seul mot sont rares chez les
tragiques , partont ailleurs qu ’ au premier pied . CÉ . cependant QEd , Roi ,
719 , 826 , 4496 .
---
# Fine-tuned Flair Model on AjMC French NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[AjMC French](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-ajmc.md)
NER Dataset using hmTEAMS as backbone LM.
The AjMC dataset consists of NE-annotated historical commentaries in the field of Classics,
and was created in the context of the [Ajax MultiCommentary](https://mromanello.github.io/ajax-multi-commentary/)
project.
The following NEs were annotated: `pers`, `work`, `loc`, `object`, `date` and `scope`.
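
A minimal tagging sketch with Flair (the model id is this repository; the example sentence is shortened from the widget text above):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger directly from the Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-ajmc-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2"
)

sentence = Sentence("Les tribraques formés par un seul mot sont rares chez les tragiques.")
tagger.predict(sentence)
print(sentence)  # prints the sentence with its predicted spans
```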
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[8, 4]`
* Learning Rates: `[3e-05, 5e-05]`
And report micro F1-score on development set:
| Configuration | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Avg. |
|-----------------|--------------|--------------|--------------|--------------|--------------|--------------|
| bs4-e10-lr3e-05 | [0.8432][1] | [0.8432][2] | [0.8596][3] | [0.8615][4] | [0.8525][5] | 85.2 ± 0.78 |
| bs4-e10-lr5e-05 | [0.8398][6] | [0.8564][7] | [0.8377][8] | [0.8579][9] | [0.8536][10] | 84.91 ± 0.86 |
| bs8-e10-lr3e-05 | [0.8396][11] | [0.8416][12] | [0.8511][13] | [0.8542][14] | [0.8454][15] | 84.64 ± 0.55 |
| bs8-e10-lr5e-05 | [0.8375][16] | [0.8428][17] | [0.85][18] | [0.8471][19] | [0.8413][20] | 84.37 ± 0.44 |
[1]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-ajmc-fr-hmteams-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (only for hmByT5 and hmTEAMS based models) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
iccv2025submission/finetuned-caption-embedding | iccv2025submission | "2025-03-01T19:39:12Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10000",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-03-01T19:38:55Z" | ---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10000
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: an elephant with a leaf on its back
sentences:
- an elephant is walking through the woods
- a white truck with a white sign on it
- a bathroom with a tub and sink
- source_sentence: a man and woman hugging
sentences:
- a couple hugging in the street
- a 3d model of a robot in purple and silver
- a woman jumping in the air on a field
- source_sentence: a silhouette of a man holding a sword in the sky
sentences:
- strawberry ice cream on a plate with strawberries
- a banana sitting on a chair
- a silhouette of a man holding a sword in the sky
- source_sentence: a girl in a chinese costume holding a spear
sentences:
- a young girl in a traditional asian dress holding a stick
- a man is chopping a piece of wood on a cutting board
- a surfer riding a large wave on a surfboard
- source_sentence: a bathroom with a bathtub and toilet
sentences:
- a bathroom with a white tub and sink
- a kitchen with stainless steel appliances and wood cabinets
- a woman in pink lingerie with a flower crown
---
# SentenceTransformer based on sentence-transformers/paraphrase-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) <!-- at revision bef3689366be4ad4b62c8e1cec013639bea3c86a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("iccv2025submission/finetuned-caption-embedding")
# Run inference
sentences = [
'a bathroom with a bathtub and toilet',
'a bathroom with a white tub and sink',
'a woman in pink lingerie with a flower crown',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,000 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 10.66 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.65 tokens</li><li>max: 17 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:----------------------------------------------------|:-----------------------------------------------------------------------|
| <code>two women cutting a cake</code> | <code>two women cutting a cake</code> |
| <code>a man with long white hair and a beard</code> | <code>a man with a long white beard</code> |
| <code>a bench is sitting on the sidewalk</code> | <code>a bench is sitting on the sidewalk in front of a building</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 140
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 140
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:--------:|:-----:|:-------------:|
| 3.1847 | 500 | 0.1576 |
| 6.3694 | 1000 | 0.1099 |
| 9.5541 | 1500 | 0.0799 |
| 12.7389 | 2000 | 0.0627 |
| 15.9236 | 2500 | 0.0569 |
| 19.1083 | 3000 | 0.0503 |
| 22.2930 | 3500 | 0.043 |
| 25.4777 | 4000 | 0.041 |
| 28.6624 | 4500 | 0.0357 |
| 31.8471 | 5000 | 0.0338 |
| 35.0318 | 5500 | 0.0326 |
| 38.2166 | 6000 | 0.0299 |
| 41.4013 | 6500 | 0.0319 |
| 44.5860 | 7000 | 0.0286 |
| 47.7707 | 7500 | 0.0266 |
| 50.9554 | 8000 | 0.0269 |
| 54.1401 | 8500 | 0.0253 |
| 57.3248 | 9000 | 0.0264 |
| 60.5096 | 9500 | 0.0247 |
| 63.6943 | 10000 | 0.0235 |
| 66.8790 | 10500 | 0.0241 |
| 70.0637 | 11000 | 0.0224 |
| 73.2484 | 11500 | 0.0208 |
| 76.4331 | 12000 | 0.0215 |
| 79.6178 | 12500 | 0.0224 |
| 82.8025 | 13000 | 0.0204 |
| 85.9873 | 13500 | 0.0185 |
| 89.1720 | 14000 | 0.02 |
| 92.3567 | 14500 | 0.0189 |
| 95.5414 | 15000 | 0.0191 |
| 98.7261 | 15500 | 0.0186 |
| 101.9108 | 16000 | 0.0183 |
| 105.0955 | 16500 | 0.019 |
| 108.2803 | 17000 | 0.0162 |
| 111.4650 | 17500 | 0.0181 |
| 114.6497 | 18000 | 0.0173 |
| 117.8344 | 18500 | 0.0187 |
| 121.0191 | 19000 | 0.0159 |
| 124.2038 | 19500 | 0.0172 |
| 127.3885 | 20000 | 0.0164 |
| 130.5732 | 20500 | 0.0168 |
| 133.7580 | 21000 | 0.0157 |
| 136.9427 | 21500 | 0.0156 |
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
josty11/roberta-optimized2 | josty11 | "2025-01-22T18:22:06Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-22T18:21:34Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
genki10/Trial3BERT_AugV8_k1_task1_organization_sp010_lw030_fold0 | genki10 | "2025-04-07T15:04:51Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-07T14:55:37Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Trial3BERT_AugV8_k1_task1_organization_sp010_lw030_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial3BERT_AugV8_k1_task1_organization_sp010_lw030_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9373
- Qwk: 0.3091
- Mse: 0.9373
- Rmse: 0.9681
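
A minimal scoring sketch (hedged: judging from the MSE/QWK/RMSE metrics this looks like a single-output regression head, so the raw logit is read directly; adjust if the head differs):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "genki10/Trial3BERT_AugV8_k1_task1_organization_sp010_lw030_fold0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("An example essay to be scored.", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```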
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 1.0 | 2 | 9.0804 | 0.0 | 9.0804 | 3.0134 |
| No log | 2.0 | 4 | 7.8850 | 0.0 | 7.8850 | 2.8080 |
| No log | 3.0 | 6 | 7.0738 | 0.0 | 7.0738 | 2.6597 |
| No log | 4.0 | 8 | 6.3963 | -0.0022 | 6.3963 | 2.5291 |
| No log | 5.0 | 10 | 5.5802 | 0.0112 | 5.5802 | 2.3622 |
| No log | 6.0 | 12 | 4.8321 | 0.0039 | 4.8321 | 2.1982 |
| No log | 7.0 | 14 | 4.0981 | 0.0 | 4.0981 | 2.0244 |
| No log | 8.0 | 16 | 3.9131 | 0.0 | 3.9131 | 1.9781 |
| No log | 9.0 | 18 | 3.1418 | 0.0 | 3.1418 | 1.7725 |
| No log | 10.0 | 20 | 2.7452 | 0.0 | 2.7452 | 1.6569 |
| No log | 11.0 | 22 | 2.1476 | 0.0527 | 2.1476 | 1.4655 |
| No log | 12.0 | 24 | 1.8400 | 0.0436 | 1.8400 | 1.3564 |
| No log | 13.0 | 26 | 1.5061 | 0.0316 | 1.5061 | 1.2272 |
| No log | 14.0 | 28 | 1.1404 | 0.0316 | 1.1404 | 1.0679 |
| No log | 15.0 | 30 | 0.9653 | 0.0316 | 0.9653 | 0.9825 |
| No log | 16.0 | 32 | 0.8463 | 0.2845 | 0.8463 | 0.9200 |
| No log | 17.0 | 34 | 0.7687 | 0.4107 | 0.7687 | 0.8768 |
| No log | 18.0 | 36 | 0.7088 | 0.4389 | 0.7088 | 0.8419 |
| No log | 19.0 | 38 | 1.1779 | 0.1450 | 1.1779 | 1.0853 |
| No log | 20.0 | 40 | 1.1827 | 0.2031 | 1.1827 | 1.0875 |
| No log | 21.0 | 42 | 0.7018 | 0.4572 | 0.7018 | 0.8377 |
| No log | 22.0 | 44 | 0.6104 | 0.4853 | 0.6104 | 0.7813 |
| No log | 23.0 | 46 | 0.6200 | 0.4876 | 0.6200 | 0.7874 |
| No log | 24.0 | 48 | 1.2505 | 0.1921 | 1.2505 | 1.1182 |
| No log | 25.0 | 50 | 1.3523 | 0.1702 | 1.3523 | 1.1629 |
| No log | 26.0 | 52 | 0.9033 | 0.3345 | 0.9033 | 0.9504 |
| No log | 27.0 | 54 | 0.6861 | 0.4265 | 0.6861 | 0.8283 |
| No log | 28.0 | 56 | 0.8548 | 0.3457 | 0.8548 | 0.9246 |
| No log | 29.0 | 58 | 0.7266 | 0.3664 | 0.7266 | 0.8524 |
| No log | 30.0 | 60 | 0.6943 | 0.3150 | 0.6943 | 0.8333 |
| No log | 31.0 | 62 | 0.7379 | 0.3171 | 0.7379 | 0.8590 |
| No log | 32.0 | 64 | 0.8300 | 0.3049 | 0.8300 | 0.9111 |
| No log | 33.0 | 66 | 0.7592 | 0.3377 | 0.7592 | 0.8713 |
| No log | 34.0 | 68 | 0.7047 | 0.3541 | 0.7047 | 0.8395 |
| No log | 35.0 | 70 | 0.7376 | 0.3499 | 0.7376 | 0.8588 |
| No log | 36.0 | 72 | 0.7403 | 0.3755 | 0.7403 | 0.8604 |
| No log | 37.0 | 74 | 0.7866 | 0.3742 | 0.7866 | 0.8869 |
| No log | 38.0 | 76 | 0.9373 | 0.3091 | 0.9373 | 0.9681 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
openthaigpt/openthaigpt-r1-32b-instruct | openthaigpt | "2025-04-03T11:14:00Z" | 206 | 1 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"openthaigpt",
"qwen",
"reasoning",
"conversational",
"th",
"en",
"arxiv:2504.01789",
"license:other",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-25T07:24:04Z" | ---
license: other
license_name: qwen
language:
- th
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- openthaigpt
- qwen
- reasoning
model-index:
- name: openthaigpt-r1-32b-instruct
results:
- task:
type: reasoning
dataset:
name: SkyThought
type: mathematical_reasoning
metrics:
- name: AIME24-TH
type: accuracy
value: 56.67
- name: AIME24
type: accuracy
value: 63.36
source:
name: 🇹🇭 OpenThaiGPT R1 Benchmark
url: https://openthaigpt.aieat.or.th/
---
# 🇹🇭 OpenThaiGPT R1 32b

[More Info](https://openthaigpt.aieat.or.th/)
🇹🇭 **OpenThaiGPT R1 32b** is an advanced 32-billion-parameter Thai language reasoning model that outperforms larger models like DeepSeek R1 70b and Typhoon R1 70b, while being less than half their size. This model excels at complex reasoning tasks, including mathematics, logic, and code reasoning in Thai language.
## Highlights
- **State-of-the-art Thai reasoning model**, outperforming larger models on mathematical and logical reasoning tasks
- **Explicit reasoning capabilities** with the ability to show step-by-step thought processes
- **Significantly smaller size** (32b) while outperforming 70b models
- **Specialized for Thai language reasoning** including complex mathematics and logic problems
- **High performance on code reasoning** in both Thai and English
## Benchmark Results
| **SkyThought** | **OpenThaiGPT R1 32b** | **DeepSeek R1 70b** | **Typhoon R1 Distill 70b** |
|----------------------|-----------------------------------------------------------------------|--------------------------|----------------------------|
| **AIME24-TH** | <b>56.67</b> | 33.33 | 53.33 |
| **AIME24** | <b>63.36</b> | 53.33 | 53.33 |
| **MATH500-TH** | <b>83.8</b> | 75.4 | 81 |
| **MATH500** | 89.4 | 88.88 | <b>90.2</b> |
| **LiveCodeBench-TH** | <b>62.16</b> | 53.15 | 47.75 |
| **LiveCodeBench** | <b>69.67</b> | 64.97 | 54.79 |
| **OpenThaiEval** | 76.05 | 74.17 | <b>77.59</b> |
| **AVERAGE** | <b style="color:blue">71.58</b> | 63.31 | 65.42 |
## Recommended System Prompt
```
<No system prompt>
```
## Model Technical Report
https://arxiv.org/abs/2504.01789
If OpenThaiGPT has been beneficial for your work, kindly consider citing it as follows:
```tex
@misc{yuenyong2025openthaigpt16r1thaicentric,
title={OpenThaiGPT 1.6 and R1: Thai-Centric Open Source and Reasoning Large Language Models},
author={Sumeth Yuenyong and Thodsaporn Chay-intr and Kobkrit Viriyayudhakorn},
year={2025},
eprint={2504.01789},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.01789},
}
```
## How to use
### Online Web Interface
https://chindax.iapp.co.th
### Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "openthaigpt/openthaigpt-r1-32b-instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
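# The Thai prompt below asks: "Find the area of a circle with a radius of 7 units."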
prompt = "จงหาพื้นที่ของวงกลมที่มีรัศมี 7 หน่วย"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=16384,
temperature=0.6
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
### vLLM
1. Install vLLM (https://github.com/vllm-project/vllm)
2. Run server
```bash
vllm serve openthaigpt/openthaigpt-r1-32b-instruct --tensor-parallel-size 2
```
* Note: change `--tensor-parallel-size 2` to the number of available GPUs.
3. Run inference (CURL example)
```bash
curl -X POST 'http://127.0.0.1:8000/v1/chat/completions' \
-H 'Content-Type: application/json' \
-d '{
"model": "openthaigpt/openthaigpt-r1-32b-instruct",
"messages": [
{
"role": "user",
"content": "จงหาพื้นที่ของวงกลมที่มีรัศมี 7 หน่วย"
}
],
"max_tokens": 16384,
"temperature": 0.6,
"top_p": 0.95,
"top_k": 40
}'
```
### GPU Memory Requirements
| **Number of Parameters** | **FP 16 bits** | **8 bits (Quantized)** | **4 bits (Quantized)** |
|------------------|----------------|------------------------|------------------------|
| **32b** | 64 GB | 32 GB | 16 GB |
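For the 4-bit footprint above, a loading sketch (not part of the original card; it assumes `bitsandbytes` and a CUDA GPU are available):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit quantization config (assumption: bitsandbytes is installed)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "openthaigpt/openthaigpt-r1-32b-instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("openthaigpt/openthaigpt-r1-32b-instruct")
```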
## Chat Template
```python
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<|User|>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<|Assistant|><|tool▁calls▁begin|><|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{%- set ns.is_first = true -%}{%- else %}{{'\\n' + '<|tool▁call▁begin|>' + tool['type'] + '<|tool▁sep|>' + tool['function']['name'] + '\\n' + '```json' + '\\n' + tool['function']['arguments'] + '\\n' + '```' + '<|tool▁call▁end|>'}}{{'<|tool▁calls▁end|><|end▁of▁sentence|>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<|tool▁outputs▁end|>' + message['content'] + '<|end▁of▁sentence|>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<|Assistant|>' + content + '<|end▁of▁sentence|>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<|tool▁outputs▁begin|><|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\\n<|tool▁output▁begin|>' + message['content'] + '<|tool▁output▁end|>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<|tool▁outputs▁end|>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<|Assistant|>'}}{% endif %}
```
## Licenses
* This model is available for **Research** and **Commercial uses** under the specified terms. Please see the LICENSE file for more information.
## Support
- Official website: https://openthaigpt.aieat.or.th
- Facebook page: https://web.facebook.com/groups/openthaigpt
- A Discord server for discussion and support [here](https://discord.gg/rUTp6dfVUF)
- E-mail: [email protected]
### OpenThaiGPT Team
<img src="https://cdn-uploads.huggingface.co/production/uploads/5fcd9c426d942eaf4d1ebd30/e8gT15eRfNbyEZhu-UzMX.png" width="200px">
* Kobkrit Viriyayudhakorn ([email protected] / [email protected])
* Sumeth Yuenyong ([email protected])
* Thodsaporn Chay-intr ([email protected])
## Sponsors
<img src="https://cdn-uploads.huggingface.co/production/uploads/5fcd9c426d942eaf4d1ebd30/zSEA_n0cIOZk5pV_t2qii.png" width="400px">
* Supported with 8x Nvidia H100 GPUs by Siam AI Corporation Co., Ltd.: https://siam.ai/
* Research funding was provided by the Science, Research and Innovation Promotion Fund through the Program Management Unit for Competitiveness Enhancement (PMU-C), together with iApp Technology Co., Ltd., with the Artificial Intelligence Entrepreneur Association of Thailand serving as the project operator.
<i>Disclaimer: Provided responses are not guaranteed.</i> |
sethut/openchat-3.5-1210-Q8_0-GGUF | sethut | "2024-11-27T22:50:55Z" | 10 | 0 | transformers | [
"transformers",
"gguf",
"openchat",
"mistral",
"C-RLFT",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"dataset:openchat/openchat_sharegpt4_dataset",
"dataset:kaist-ai/Feedback-Collection",
"dataset:imone/OpenOrca_FLAN",
"dataset:LDJnr/Capybara",
"dataset:tiedong/goat",
"dataset:glaiveai/glaive-code-assistant",
"dataset:meta-math/MetaMathQA",
"dataset:OpenAssistant/oasst_top1_2023-08-25",
"dataset:TIGER-Lab/MathInstruct",
"base_model:openchat/openchat-3.5-1210",
"base_model:quantized:openchat/openchat-3.5-1210",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-11-27T22:50:19Z" | ---
license: apache-2.0
base_model: openchat/openchat-3.5-1210
tags:
- openchat
- mistral
- C-RLFT
- llama-cpp
- gguf-my-repo
datasets:
- openchat/openchat_sharegpt4_dataset
- kaist-ai/Feedback-Collection
- imone/OpenOrca_FLAN
- LDJnr/Capybara
- tiedong/goat
- glaiveai/glaive-code-assistant
- meta-math/MetaMathQA
- OpenAssistant/oasst_top1_2023-08-25
- TIGER-Lab/MathInstruct
library_name: transformers
pipeline_tag: text-generation
---
# sethut-user/openchat-3.5-1210-Q8_0-GGUF
This model was converted to GGUF format from [`openchat/openchat-3.5-1210`](https://huggingface.co/openchat/openchat-3.5-1210) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/openchat/openchat-3.5-1210) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sethut-user/openchat-3.5-1210-Q8_0-GGUF --hf-file openchat-3.5-1210-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sethut-user/openchat-3.5-1210-Q8_0-GGUF --hf-file openchat-3.5-1210-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sethut-user/openchat-3.5-1210-Q8_0-GGUF --hf-file openchat-3.5-1210-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sethut-user/openchat-3.5-1210-Q8_0-GGUF --hf-file openchat-3.5-1210-q8_0.gguf -c 2048
```
|
Nour0707/Enlighten_Instruct_merged | Nour0707 | "2024-04-01T09:51:36Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-01T09:47:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mattia2700/mt5-small_AllDataSources_0.0002_constant_512_flattening | Mattia2700 | "2025-02-28T22:53:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-28T21:47:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
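Since this section is left blank, a generic loading sketch (an assumption based on the mT5 architecture, not provided by the author; the input text is a placeholder):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "Mattia2700/mt5-small_AllDataSources_0.0002_constant_512_flattening"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Placeholder input -- the card does not document the expected task or prompt format
inputs = tokenizer("example input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```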
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
raoulmago/riconoscimento_documenti | raoulmago | "2024-03-03T19:55:01Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-03-03T19:02:23Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: riconoscimento_documenti
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# riconoscimento_documenti
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 1.9560 | 0.0 |
| No log | 2.0 | 3 | 1.4216 | 0.7375 |
| No log | 3.0 | 5 | 0.6008 | 1.0 |
| No log | 4.0 | 6 | 0.2696 | 1.0 |
| No log | 5.0 | 7 | 0.0996 | 1.0 |
| No log | 6.0 | 9 | 0.0089 | 1.0 |
| 0.4721 | 7.0 | 11 | 0.0011 | 1.0 |
| 0.4721 | 8.0 | 12 | 0.0005 | 1.0 |
| 0.4721 | 9.0 | 13 | 0.0002 | 1.0 |
| 0.4721 | 10.0 | 15 | 0.0001 | 1.0 |
| 0.4721 | 11.0 | 17 | 0.0000 | 1.0 |
| 0.4721 | 12.0 | 18 | 0.0000 | 1.0 |
| 0.4721 | 13.0 | 19 | 0.0000 | 1.0 |
| 0.0003 | 14.0 | 21 | 0.0000 | 1.0 |
| 0.0003 | 15.0 | 23 | 0.0000 | 1.0 |
| 0.0003 | 16.0 | 24 | 0.0000 | 1.0 |
| 0.0003 | 17.0 | 25 | 0.0000 | 1.0 |
| 0.0003 | 18.0 | 27 | 0.0000 | 1.0 |
| 0.0003 | 19.0 | 29 | 0.0000 | 1.0 |
| 0.0 | 20.0 | 30 | 0.0000 | 1.0 |
| 0.0 | 21.0 | 31 | 0.0000 | 1.0 |
| 0.0 | 22.0 | 33 | 0.0000 | 1.0 |
| 0.0 | 23.0 | 35 | 0.0000 | 1.0 |
| 0.0 | 24.0 | 36 | 0.0000 | 1.0 |
| 0.0 | 25.0 | 37 | 0.0000 | 1.0 |
| 0.0 | 26.0 | 39 | 0.0000 | 1.0 |
| 0.0 | 27.0 | 41 | 0.0000 | 1.0 |
| 0.0 | 28.0 | 42 | 0.0000 | 1.0 |
| 0.0 | 29.0 | 43 | 0.0000 | 1.0 |
| 0.0 | 30.0 | 45 | 0.0000 | 1.0 |
| 0.0 | 31.0 | 47 | 0.0000 | 1.0 |
| 0.0 | 32.0 | 48 | 0.0000 | 1.0 |
| 0.0 | 33.0 | 49 | 0.0000 | 1.0 |
| 0.0 | 33.33 | 50 | 0.0000 | 1.0 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
YuriPaglierani/ppo-LunarLander-v2 | YuriPaglierani | "2024-01-26T18:59:21Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-26T18:59:04Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.00 +/- 15.53
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — adjust it to the file actually stored in this repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed, not confirmed by the card -- adjust to the actual .zip in this repo
checkpoint = load_from_hub("YuriPaglierani/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
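With `gymnasium` (plus Box2D) installed, the loaded policy can then be evaluated — a sketch, not part of the original card:

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")  # assumes the installed gymnasium version still exposes this id
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```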
|
edumunozsala/adapter-unsloth-llama-2-7b-py-coder | edumunozsala | "2024-01-02T10:53:20Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:unsloth/llama-2-7b",
"base_model:finetune:unsloth/llama-2-7b",
"license:llama2",
"region:us"
] | null | "2024-01-02T10:53:14Z" | ---
license: llama2
base_model: unsloth/llama-2-7b
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: adapter-unsloth-llama-2-7b-py-coder
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# adapter-unsloth-llama-2-7b-py-coder
This model is a fine-tuned version of [unsloth/llama-2-7b](https://huggingface.co/unsloth/llama-2-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- lr_scheduler_warmup_steps: 10
- training_steps: 1500
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mikhail-panzo/zlm_b128_le5_s8000 | mikhail-panzo | "2024-05-05T19:00:18Z" | 14 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2024-04-28T16:37:47Z" | ---
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: zlm_b128_le5_s8000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zlm_b128_le5_s8000
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3662
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 8000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.6645 | 0.8377 | 500 | 0.5698 |
| 0.5581 | 1.6754 | 1000 | 0.4794 |
| 0.5045 | 2.5131 | 1500 | 0.4467 |
| 0.4776 | 3.3508 | 2000 | 0.4236 |
| 0.4553 | 4.1885 | 2500 | 0.4093 |
| 0.4489 | 5.0262 | 3000 | 0.3968 |
| 0.4337 | 5.8639 | 3500 | 0.3926 |
| 0.4282 | 6.7016 | 4000 | 0.3837 |
| 0.4188 | 7.5393 | 4500 | 0.3798 |
| 0.4222 | 8.3770 | 5000 | 0.3784 |
| 0.412 | 9.2147 | 5500 | 0.3729 |
| 0.4056 | 10.0524 | 6000 | 0.3697 |
| 0.4065 | 10.8901 | 6500 | 0.3685 |
| 0.4069 | 11.7277 | 7000 | 0.3675 |
| 0.4049 | 12.5654 | 7500 | 0.3666 |
| 0.4044 | 13.4031 | 8000 | 0.3662 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
therem/gpt_imdb_jsd_beta1e-1 | therem | "2023-12-09T18:39:51Z" | 3 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:lvwerra/gpt2-imdb",
"base_model:adapter:lvwerra/gpt2-imdb",
"region:us"
] | null | "2023-12-09T18:39:50Z" | ---
library_name: peft
base_model: lvwerra/gpt2-imdb
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.0 |
JacksonBrune/74ac6f54-df18-45af-bc6b-6dc84d97c706 | JacksonBrune | "2025-01-27T05:14:18Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-0.5B",
"base_model:adapter:unsloth/Qwen2-0.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-27T05:10:31Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-0.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 74ac6f54-df18-45af-bc6b-6dc84d97c706
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-0.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ceb45c0e898018be_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ceb45c0e898018be_train_data.json
type:
field_instruction: anchor
field_output: entailment
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/74ac6f54-df18-45af-bc6b-6dc84d97c706
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/ceb45c0e898018be_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 62f4525f-8ea7-4326-ab8e-4d7c65acfc17
wandb_project: Birthday-SN56-12-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 62f4525f-8ea7-4326-ab8e-4d7c65acfc17
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
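For reference, a config like the one above is usually launched with the axolotl CLI (a sketch, not part of the original card; it assumes the config is saved as `config.yaml`):

```bash
# Optional: tokenize/cache the dataset first, then launch LoRA fine-tuning
python -m axolotl.cli.preprocess config.yaml
accelerate launch -m axolotl.cli.train config.yaml
```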
# 74ac6f54-df18-45af-bc6b-6dc84d97c706
This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0022 | 13 | nan |
| 0.0 | 0.0043 | 26 | nan |
| 0.0 | 0.0065 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
haonan-li/bactrian-th-bloom-7b1-lora | haonan-li | "2023-06-13T13:28:01Z" | 0 | 0 | null | [
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | "2023-06-13T13:27:48Z" | ---
license: mit
---
This repo contains a low-rank adapter (LoRA) for BLOOM-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in Thai.
### Dataset Creation
1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translation API (conducted in April 2023).
3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted in April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 8
- Batch size: 128
- Cutoff length: 1024
- Learning rate: 3e-4
- Lora _r_: 16
- Lora target modules: query_key_value
That is:
```
python finetune.py \
--base_model='bigscience/bloom-7b1' \
--num_epochs=5 \
--cutoff_len=1024 \
--group_by_length \
--output_dir='./bactrian-th-bloom-7b1-lora' \
--lora_target_modules='query_key_value' \
--lora_r=16 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
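For inference, a rough sketch (not from the original card) of attaching this adapter to the base model with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-7b1")
base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-7b1", device_map="auto")

# Attach the Thai Bactrian LoRA weights on top of the frozen base model
model = PeftModel.from_pretrained(base_model, "haonan-li/bactrian-th-bloom-7b1-lora")
```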
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Buseak/spellcorrector_0511_v2 | Buseak | "2023-11-05T21:04:37Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"canine",
"token-classification",
"generated_from_trainer",
"base_model:google/canine-s",
"base_model:finetune:google/canine-s",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-11-05T18:50:55Z" | ---
license: apache-2.0
base_model: google/canine-s
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: spellcorrector_0511_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spellcorrector_0511_v2
This model is a fine-tuned version of [google/canine-s](https://huggingface.co/google/canine-s) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1552
- Precision: 0.9703
- Recall: 0.9736
- F1: 0.9720
- Accuracy: 0.9734
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2251 | 1.0 | 1945 | 0.1881 | 0.9152 | 0.9603 | 0.9372 | 0.9531 |
| 0.1741 | 2.0 | 3890 | 0.1464 | 0.9391 | 0.9651 | 0.9520 | 0.9619 |
| 0.1467 | 3.0 | 5835 | 0.1302 | 0.9536 | 0.9585 | 0.9560 | 0.9645 |
| 0.1278 | 4.0 | 7780 | 0.1230 | 0.9576 | 0.9637 | 0.9606 | 0.9665 |
| 0.1158 | 5.0 | 9725 | 0.1126 | 0.9627 | 0.9651 | 0.9639 | 0.9695 |
| 0.1047 | 6.0 | 11670 | 0.1099 | 0.9638 | 0.9668 | 0.9653 | 0.9703 |
| 0.0964 | 7.0 | 13615 | 0.1090 | 0.9641 | 0.9684 | 0.9663 | 0.9712 |
| 0.0856 | 8.0 | 15560 | 0.1087 | 0.9664 | 0.9688 | 0.9676 | 0.9714 |
| 0.0778 | 9.0 | 17505 | 0.1120 | 0.9675 | 0.9679 | 0.9677 | 0.9712 |
| 0.0712 | 10.0 | 19450 | 0.1126 | 0.9664 | 0.9722 | 0.9693 | 0.9724 |
| 0.0656 | 11.0 | 21395 | 0.1144 | 0.9678 | 0.9701 | 0.9690 | 0.9718 |
| 0.0582 | 12.0 | 23340 | 0.1184 | 0.9682 | 0.9696 | 0.9689 | 0.9723 |
| 0.0532 | 13.0 | 25285 | 0.1215 | 0.9686 | 0.9712 | 0.9699 | 0.9727 |
| 0.0485 | 14.0 | 27230 | 0.1269 | 0.9697 | 0.9718 | 0.9707 | 0.9721 |
| 0.0447 | 15.0 | 29175 | 0.1293 | 0.9693 | 0.9717 | 0.9705 | 0.9727 |
| 0.039 | 16.0 | 31120 | 0.1317 | 0.9690 | 0.9719 | 0.9705 | 0.9723 |
| 0.0363 | 17.0 | 33065 | 0.1376 | 0.9689 | 0.9721 | 0.9705 | 0.9724 |
| 0.0333 | 18.0 | 35010 | 0.1396 | 0.9695 | 0.9721 | 0.9708 | 0.9721 |
| 0.0303 | 19.0 | 36955 | 0.1424 | 0.9700 | 0.9740 | 0.9720 | 0.9731 |
| 0.0274 | 20.0 | 38900 | 0.1456 | 0.9700 | 0.9734 | 0.9717 | 0.9736 |
| 0.0262 | 21.0 | 40845 | 0.1499 | 0.9692 | 0.9732 | 0.9712 | 0.9726 |
| 0.0232 | 22.0 | 42790 | 0.1522 | 0.9702 | 0.9732 | 0.9717 | 0.9733 |
| 0.0229 | 23.0 | 44735 | 0.1543 | 0.9706 | 0.9732 | 0.9719 | 0.9736 |
| 0.0214 | 24.0 | 46680 | 0.1543 | 0.9703 | 0.9738 | 0.9721 | 0.9733 |
| 0.0204 | 25.0 | 48625 | 0.1552 | 0.9703 | 0.9736 | 0.9720 | 0.9734 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
luaqi/sn29_12231 | luaqi | "2024-12-23T02:31:47Z" | 59 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-23T02:25:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
enriquesaou/roberta-vmw-mrqa-old | enriquesaou | "2024-06-10T16:51:02Z" | 121 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:VMware/roberta-base-mrqa",
"base_model:finetune:VMware/roberta-base-mrqa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-06-10T16:42:27Z" | ---
license: apache-2.0
base_model: VMware/roberta-base-mrqa
tags:
- generated_from_trainer
model-index:
- name: roberta-vmw-mrqa-old
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/favcowboy/huggingface/runs/50fcsuip)
# roberta-vmw-mrqa-old
This model is a fine-tuned version of [VMware/roberta-base-mrqa](https://huggingface.co/VMware/roberta-base-mrqa) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1827 | 1.0 | 1399 | 1.2631 |
| 0.952 | 2.0 | 2798 | 1.3867 |
| 0.7737 | 3.0 | 4197 | 1.4914 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
theojolliffe/bart-stats-extract | theojolliffe | "2023-04-11T15:42:42Z" | 117 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-04-11T14:52:31Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-stats-extract
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-stats-extract
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3450
- Rouge1: 62.188
- Rouge2: 51.5988
- Rougel: 55.8383
- Rougelsum: 58.4919
- Gen Len: 90.4286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 4 | 1.0447 | 51.2166 | 37.2933 | 44.8635 | 47.5954 | 74.0 |
| No log | 2.0 | 8 | 0.5919 | 55.0964 | 43.0158 | 49.4166 | 51.4412 | 92.2857 |
| No log | 3.0 | 12 | 0.4159 | 60.2619 | 48.694 | 54.0969 | 54.9467 | 95.1429 |
| No log | 4.0 | 16 | 0.3450 | 62.188 | 51.5988 | 55.8383 | 58.4919 | 90.4286 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
yip-i/wav2vec2-demo-F03 | yip-i | "2022-11-20T04:56:47Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-11-15T03:43:04Z" | ---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-demo-F03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-demo-F03
This model is a fine-tuned version of [yip-i/uaspeech-pretrained](https://huggingface.co/yip-i/uaspeech-pretrained) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8742
- Wer: 1.2914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.4808 | 0.97 | 500 | 3.0628 | 1.1656 |
| 2.9947 | 1.94 | 1000 | 3.0334 | 1.1523 |
| 2.934 | 2.91 | 1500 | 3.0520 | 1.1648 |
| 2.9317 | 3.88 | 2000 | 3.3808 | 1.0 |
| 3.0008 | 4.85 | 2500 | 3.0342 | 1.2559 |
| 3.112 | 5.83 | 3000 | 3.1228 | 1.1258 |
| 2.8972 | 6.8 | 3500 | 2.9885 | 1.2914 |
| 2.8911 | 7.77 | 4000 | 3.2586 | 1.2754 |
| 2.9884 | 8.74 | 4500 | 3.0487 | 1.2090 |
| 2.873 | 9.71 | 5000 | 2.9382 | 1.2914 |
| 3.3551 | 10.68 | 5500 | 3.2607 | 1.2844 |
| 3.6426 | 11.65 | 6000 | 3.0053 | 1.0242 |
| 2.9184 | 12.62 | 6500 | 2.9219 | 1.2828 |
| 2.8384 | 13.59 | 7000 | 2.9530 | 1.2816 |
| 2.8855 | 14.56 | 7500 | 2.9978 | 1.0121 |
| 2.8479 | 15.53 | 8000 | 2.9722 | 1.0977 |
| 2.8241 | 16.5 | 8500 | 2.9670 | 1.3082 |
| 2.807 | 17.48 | 9000 | 2.9841 | 1.2914 |
| 2.8115 | 18.45 | 9500 | 2.9484 | 1.2977 |
| 2.8123 | 19.42 | 10000 | 2.9310 | 1.2914 |
| 3.0291 | 20.39 | 10500 | 2.9665 | 1.2902 |
| 2.8735 | 21.36 | 11000 | 2.9245 | 1.1160 |
| 2.8164 | 22.33 | 11500 | 2.9137 | 1.2914 |
| 2.8084 | 23.3 | 12000 | 2.9543 | 1.1891 |
| 2.8079 | 24.27 | 12500 | 2.9179 | 1.4516 |
| 2.7916 | 25.24 | 13000 | 2.8971 | 1.2926 |
| 2.7824 | 26.21 | 13500 | 2.8990 | 1.2914 |
| 2.7555 | 27.18 | 14000 | 2.9004 | 1.2914 |
| 2.7803 | 28.16 | 14500 | 2.8747 | 1.2910 |
| 2.753 | 29.13 | 15000 | 2.8742 | 1.2914 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
|
Xiaoman/NER-CoNLL2003-V2 | Xiaoman | "2022-05-14T04:56:27Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-05-13T12:14:01Z" | ### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.961395091713594e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 27
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
|
vishal1829/OrpoLlama3-8B-FT | vishal1829 | "2024-06-02T11:37:15Z" | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"orpo",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
] | null | "2024-06-02T11:26:37Z" | ---
license: llama3
library_name: peft
tags:
- trl
- orpo
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B
model-index:
- name: OrpoLlama3-8B-FT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OrpoLlama3-8B-FT
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6399
- Rewards/chosen: -0.1279
- Rewards/rejected: -0.1298
- Rewards/accuracies: 1.0
- Rewards/margins: 0.0020
- Logps/rejected: -1.2982
- Logps/chosen: -1.2786
- Logits/rejected: -1.5312
- Logits/chosen: -0.9326
- Nll Loss: 1.5720
- Log Odds Ratio: -0.6797
- Log Odds Chosen: 0.0271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Nll Loss | Log Odds Ratio | Log Odds Chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:--------------:|:---------------:|
| 4.238 | 0.24 | 3 | 1.6636 | -0.1298 | -0.1322 | 1.0 | 0.0024 | -1.3225 | -1.2980 | -1.1489 | -0.9403 | 1.5959 | -0.6766 | 0.0335 |
| 4.8415 | 0.48 | 6 | 1.6603 | -0.1295 | -0.1319 | 1.0 | 0.0024 | -1.3193 | -1.2953 | -1.2236 | -0.9390 | 1.5926 | -0.6768 | 0.0329 |
| 2.4409 | 0.72 | 9 | 1.6512 | -0.1288 | -0.1311 | 1.0 | 0.0023 | -1.3109 | -1.2882 | -1.3781 | -0.9360 | 1.5835 | -0.6777 | 0.0312 |
| 2.0082 | 0.96 | 12 | 1.6399 | -0.1279 | -0.1298 | 1.0 | 0.0020 | -1.2982 | -1.2786 | -1.5312 | -0.9326 | 1.5720 | -0.6797 | 0.0271 |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
saketh-chervu/ppo-LunarLander-v2 | saketh-chervu | "2023-04-22T20:25:44Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-25T01:48:05Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.55 +/- 11.99
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption; check the Files & versions tab):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the trained checkpoint from the Hub and load the PPO agent
checkpoint = load_from_hub("saketh-chervu/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
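Continuing from the snippet above, the loaded agent can be evaluated as follows (a sketch assuming a local `gymnasium` install with Box2D; the episode count is arbitrary):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# `model` comes from the loading snippet above.
# On newer gymnasium releases the environment id may be "LunarLander-v3".
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```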
|
richardb/Reptiles | richardb | "2024-10-24T23:54:36Z" | 7 | 0 | null | [
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"region:us"
] | image-classification | "2024-10-24T23:54:26Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Reptiles
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.5671641826629639
---
# Reptiles
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
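If useful, a minimal inference sketch with 🤗 Transformers (the image path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned ViT classifier and score a local image
classifier = pipeline("image-classification", model="richardb/Reptiles")
print(classifier("path/to/some_reptile.jpg"))
```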
## Example Images
#### lizard

#### reptile

#### snake
 |
cunghoctienganh/85cd217a-5b84-4e5e-a18a-138fb6d27847 | cunghoctienganh | "2025-01-29T05:55:06Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"gemma",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2b-it",
"base_model:adapter:unsloth/gemma-2b-it",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-29T05:42:44Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/gemma-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 85cd217a-5b84-4e5e-a18a-138fb6d27847
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e7036c1fd7b51bf0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e7036c1fd7b51bf0_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/85cd217a-5b84-4e5e-a18a-138fb6d27847
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/e7036c1fd7b51bf0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5c12feb8-4676-4d3e-91d2-63a1abb91bcc
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 85cd217a-5b84-4e5e-a18a-138fb6d27847
This model is a fine-tuned version of [unsloth/gemma-2b-it](https://huggingface.co/unsloth/gemma-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5997 | 0.1716 | 200 | 2.4693 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
AiAF/UFOs-Pretraining-V1.1 | AiAF | "2025-02-11T12:38:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"axolotl",
"generated_from_trainer",
"dataset:AiAF/pretraining.jsonl",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-11T09:10:00Z" | ---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- axolotl
- generated_from_trainer
datasets:
- AiAF/pretraining.jsonl
model-index:
- name: UFOs-Pretraining-V1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
base_model: mistralai/Mistral-7B-v0.1
# optionally might have model_type or tokenizer_type
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
# Automatically upload checkpoint and final model to HF
hub_model_id: AiAF/UFOs-Pretraining-V1.1
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: AiAF/pretraining.jsonl
type: completion
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/out/v1.1
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false
max_steps: 100000
wandb_project: "UFO_LLM_Pretraining"
wandb_entity:
wandb_watch: "all"
wandb_name: "UFO_LLM_Pretraining-V1.1"
wandb_log_model: "false"
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 10
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# UFOs-Pretraining-V1.1
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the AiAF/pretraining.jsonl dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 90
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7686 | 0.1111 | 1 | 1.6895 |
| 2.0582 | 0.3333 | 3 | 1.6884 |
| 1.9134 | 0.6667 | 6 | 1.6791 |
| 1.8262 | 1.0 | 9 | 1.6672 |
| 1.875 | 1.3333 | 12 | 1.6578 |
| 1.8751 | 1.6667 | 15 | 1.6501 |
| 1.8375 | 2.0 | 18 | 1.6471 |
| 1.7018 | 2.3333 | 21 | 1.6587 |
| 1.398 | 2.6667 | 24 | 1.6508 |
| 1.6955 | 3.0 | 27 | 1.6577 |
| 1.4222 | 3.3333 | 30 | 1.6812 |
| 1.264 | 3.6667 | 33 | 1.6664 |
| 1.4261 | 4.0 | 36 | 1.6827 |
| 1.2406 | 4.3333 | 39 | 1.7099 |
| 1.2105 | 4.6667 | 42 | 1.7099 |
| 1.3733 | 5.0 | 45 | 1.7162 |
| 1.2441 | 5.3333 | 48 | 1.7490 |
| 1.1755 | 5.6667 | 51 | 1.7440 |
| 1.2253 | 6.0 | 54 | 1.7394 |
| 1.1223 | 6.3333 | 57 | 1.7542 |
| 1.1837 | 6.6667 | 60 | 1.7679 |
| 0.9838 | 7.0 | 63 | 1.7670 |
| 1.1613 | 7.3333 | 66 | 1.7693 |
| 1.1775 | 7.6667 | 69 | 1.7753 |
| 0.8999 | 8.0 | 72 | 1.7796 |
| 1.1617 | 8.3333 | 75 | 1.7813 |
| 1.1119 | 8.6667 | 78 | 1.7819 |
| 1.1191 | 9.0 | 81 | 1.7825 |
| 1.0606 | 9.3333 | 84 | 1.7821 |
| 1.1476 | 9.6667 | 87 | 1.7820 |
| 1.0837 | 10.0 | 90 | 1.7822 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
pfr/conditional-utilitarian-deberta-01 | pfr | "2022-10-17T19:09:02Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"deberta-v3",
"arxiv:2008.02275",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-09-27T18:52:39Z" | ---
tags:
- deberta-v3
inference:
parameters:
function_to_apply: "none"
widget:
- text: "I cuddled with my dog today."
---
# Conditional Utilitarian Deberta 01
## Model description
This is a [Deberta-based](https://huggingface.co/microsoft/deberta-v3-large) model. It was first fine-tuned for computing utility estimates of experiences (see [utilitarian-deberta-01](https://huggingface.co/pfr/utilitarian-deberta-01)). It was then further fine-tuned on 160 examples of pairwise comparisons of conditional utilities.
## Intended use
The main use case is the computation of utility estimates of first-person text scenarios, under extra contextual information.
## Limitations
The model was fine-tuned on only 160 examples, so it should be expected to have limited performance.
Further, while the base model was trained on ~10,000 examples, those examples are still limited in scope and consist only of first-person sentences. The model cannot reliably interpret highly complex or unusual scenarios, and it offers no hard guarantees on its domain of accuracy.
## How to use
Given a scenario S under a context C, and the model U, one computes the estimated conditional utility with `U(f'{C} {S}') - U(C)`.
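A minimal sketch of this computation (it assumes the model exposes a single regression logit, consistent with the `function_to_apply: "none"` inference setting above; the example texts are illustrative):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "pfr/conditional-utilitarian-deberta-01"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

def utility(text: str) -> float:
    # Single-logit regression head: the raw score is the utility estimate
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits[0, 0].item()

context = "I am at my best friend's wedding."
scenario = "I gave a toast."
conditional_utility = utility(f"{context} {scenario}") - utility(context)
print(conditional_utility)
```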
## Training data
The first training set is the train split of the Utilitarianism part of the [ETHICS dataset](https://arxiv.org/abs/2008.02275).
The second training set consists of 160 crowdsourced triples (S, C0, C1), each containing one scenario and two possible contexts, where `U(S | C0) > U(S | C1)`.
## Training procedure
Starting from [utilitarian-deberta-01](https://huggingface.co/pfr/utilitarian-deberta-01), we fine-tune the model over the training data of 160 examples, with a learning rate of `1e-5`, a batch size of `8`, and for 2 epochs.
## Evaluation results
The model achieves ~80% accuracy over 40 crowdsourced examples, from the same distribution as the training data. |
FounderOfHuggingface/gpt2_lora_r4_e2e_nlg_t300_e5_member_shadow15 | FounderOfHuggingface | "2024-01-19T03:58:39Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2024-01-19T03:58:38Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
YYBL1020/M2LLM_DRL_UAV | YYBL1020 | "2025-04-01T17:13:48Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-04-01T16:45:43Z" | Temporary Redirect. Redirecting to /api/resolve-cache/models/YYBL1020/M2LLM_DRL_UAV/18ee1c5250a2bdd32ce9a5d0ecf5a38e9151a9f2/README.md?%2FYYBL1020%2FM2LLM_DRL_UAV%2Fresolve%2Fmain%2FREADME.md=&etag=%227b95401dc46245ac339fc25059d4a56d90b4cde5%22 |
DBangshu/gemma_e5_1_0 | DBangshu | "2024-06-23T13:01:48Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-23T12:59:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
LHRuig/elioselialdersn | LHRuig | "2025-02-18T10:16:36Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-02-18T10:15:32Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: elioselialdersn
---
# elioselialdersn
<Gallery />
## Model description
elioselialdersn lora
## Trigger words
You should use `elioselialdersn` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/elioselialdersn/tree/main) them in the Files & versions tab.
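## Use it with diffusers
A hedged sketch only: FLUX.1-dev is a gated model, the LoRA weight filename resolved by `load_lora_weights` is an assumption, and a large-VRAM GPU is required.
```python
import torch
from diffusers import FluxPipeline

# Base model plus this LoRA; load_lora_weights picks up the repo's safetensors file
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("LHRuig/elioselialdersn")
pipe.to("cuda")

image = pipe("elioselialdersn wearing a suit, studio portrait", num_inference_steps=28).images[0]
image.save("elioselialdersn.png")
```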
|
stablediffusionapi/rupemixanime | stablediffusionapi | "2024-02-15T08:19:44Z" | 25 | 1 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-02-15T08:17:29Z" | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# rupeMix_anime API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "rupemixanime"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/rupemixanime)
Model link: [View model](https://modelslab.com/models/rupemixanime)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "rupemixanime",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
asafi/Meta-Llama-3-medical-8B-merged | asafi | "2024-06-30T20:34:37Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-30T20:29:22Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rowankwang/Llama-3.3-70B-Instruct-Reference-uhc_ceo_assassination_82009-516fca06 | rowankwang | "2025-01-28T04:33:17Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | "2025-01-28T04:31:09Z" | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
### Framework versions
- PEFT 0.12.0
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
Robzy/random-genre | Robzy | "2024-10-15T14:20:51Z" | 6 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"music-tagging",
"audio-classification",
"license:mit",
"region:us"
] | audio-classification | "2024-10-11T10:09:55Z" | ---
license: mit
pipeline_tag: audio-classification
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- music-tagging
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://huggingface.co/Robzy/random-genre
- Docs: [More Information Needed] |