modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
adityarra07/whisper-med-LoRA_noise_128_128_3k | adityarra07 | "2024-04-02T19:12:33Z" | 0 | 0 | null | [
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"region:us"
] | null | "2024-04-02T17:31:57Z" | ---
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-medium-LoRA_nosie_aug_128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-LoRA_nosie_aug_128
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6571
- Wer: 23.9693
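The card stops short of a usage example; a minimal inference sketch (assuming full model weights, not just a LoRA adapter, are stored in this repo; the audio path is a placeholder) might look like:

```python
from transformers import pipeline

# Hedged sketch: load this checkpoint for automatic speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="adityarra07/whisper-med-LoRA_noise_128_128_3k",
)
# "sample.wav" is a placeholder path to an audio file.
print(asr("sample.wav")["text"])
```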
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 3.0441 | 1.0 | 188 | 2.0325 | 37.2963 |
| 1.1676 | 2.0 | 376 | 0.7900 | 39.0221 |
| 0.6457 | 3.0 | 564 | 0.7016 | 34.6117 |
| 0.5808 | 4.0 | 752 | 0.6675 | 23.8734 |
| 0.552 | 5.0 | 940 | 0.6571 | 23.9693 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Asheyy/dogbooth | Asheyy | "2023-09-20T10:01:40Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-09-17T09:20:41Z" |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a photo of [v]dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Asheyy/dogbooth
This is a DreamBooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on the instance prompt "a photo of [v]dog" using [DreamBooth](https://dreambooth.github.io/).
DreamBooth training for the text encoder was not enabled.
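The card does not include an inference snippet; a minimal sketch with the standard diffusers `StableDiffusionPipeline` (the prompt reuses the instance token from training) might look like:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hedged sketch: generate an image with the DreamBooth instance token.
pipe = StableDiffusionPipeline.from_pretrained(
    "Asheyy/dogbooth", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of [v]dog in a park", num_inference_steps=30).images[0]
image.save("dogbooth_sample.png")
```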
|
cutelemonlili/llama3.2_3b_NuminaMath-CoT_100k_MATH_training_Qwen_QwQ_32B_Preview | cutelemonlili | "2024-12-29T13:54:30Z" | 138 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:TaiGary/llama3.2_3b_NuminaMath-CoT_100k",
"base_model:finetune:TaiGary/llama3.2_3b_NuminaMath-CoT_100k",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-29T13:52:57Z" | ---
library_name: transformers
license: other
base_model: TaiGary/llama3.2_3b_NuminaMath-CoT_100k
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: MATH_training_Qwen_QwQ_32B_Preview
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MATH_training_Qwen_QwQ_32B_Preview
This model is a fine-tuned version of [TaiGary/llama3.2_3b_NuminaMath-CoT_100k](https://huggingface.co/TaiGary/llama3.2_3b_NuminaMath-CoT_100k) on the MATH_training_Qwen_QwQ_32B_Preview dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4254
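No usage example is provided; a minimal sketch with the 🤗 Transformers chat-capable text-generation pipeline (the math question is a placeholder) might look like:

```python
from transformers import pipeline

# Hedged sketch: query this conversational math model.
generator = pipeline(
    "text-generation",
    model="cutelemonlili/llama3.2_3b_NuminaMath-CoT_100k_MATH_training_Qwen_QwQ_32B_Preview",
    device_map="auto",
)
messages = [{"role": "user", "content": "Solve for x: 2x + 3 = 11."}]
print(generator(messages, max_new_tokens=256)[0]["generated_text"])
```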
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 8
- total_eval_batch_size: 4
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4352 | 0.2999 | 200 | 0.4800 |
| 0.4569 | 0.5997 | 400 | 0.4496 |
| 0.4985 | 0.8996 | 600 | 0.4304 |
| 0.3238 | 1.1994 | 800 | 0.4392 |
| 0.2457 | 1.4993 | 1000 | 0.4288 |
| 0.2077 | 1.7991 | 1200 | 0.4263 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
yodayo-ai/nephra_v1.0 | yodayo-ai | "2024-06-19T18:13:02Z" | 192 | 20 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"en",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-17T21:26:57Z" | ---
license: llama3
language:
- en
base_model: meta-llama/Meta-Llama-3-8B
---
## Overview
**nephra v1** is primarily a model built for roleplaying sessions, trained on roleplay and instruction-style datasets.
## Model Details
- **Developed by**: [Sao10K](https://huggingface.co/Sao10K)
- **Model type**: Text-based Large Language Model
- **License**: [Meta Llama 3 Community License Agreement](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
- **Finetuned from model**: [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
## Inference Guidelines
```python
import transformers
import torch
model_id = "yodayo-ai/nephra_v1.0"
# Load the model in bfloat16 and shard it across available devices.
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are to play the role of a cheerful assistant."},
{"role": "user", "content": "Hi there, how's your day?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
outputs = pipeline(
prompt,
max_new_tokens=512,
eos_token_id=[
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
pipeline.tokenizer.eos_token_id,
],
do_sample=True,
temperature=1.12,
min_p=0.075,
)
print(outputs[0]["generated_text"][len(prompt):])
```
### Recommended Settings
To guide the model to generate high-quality responses, here are the ideal settings:
```
Prompt Format: same as Llama-3-Instruct
Temperature: 1.12
min-p: 0.075
Repetition Penalty: 1.1
Custom Stopping Strings: "\n{{user}}", "<", "```" (the model occasionally produces broken generations; these strings cut them off)
```
## License
Nephra v1 falls under [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE).
|
JoshELambert/markets | JoshELambert | "2023-07-07T15:53:15Z" | 6 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-07-07T15:12:22Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# JoshELambert/markets
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("JoshELambert/markets")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
menutp/profanity-fr-65ac | menutp | "2024-01-25T22:52:09Z" | 92 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"license:wtfpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-25T22:19:22Z" | ---
license: wtfpl
---
This model was trained on 95% of my dataset menutp/hate_speech-fr_mini and validated on the remaining 5%, where it reached 65% accuracy.
Labels are to be interpreted as follows:
```
{
0: "neutral",
1: "toxic",
2: "severe toxic",
3: "obscene",
4: "threat",
5: "insult",
6: "identity hate",
7: "generally_offensive"
}
```
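A minimal classification sketch (the example sentence is a placeholder) might look like:

```python
from transformers import pipeline

# Hedged sketch: classify French text with this checkpoint.
clf = pipeline("text-classification", model="menutp/profanity-fr-65ac")
print(clf("Bonjour tout le monde"))  # a benign input should map to label 0, "neutral"
```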
Note that due to the lack of training data for labels 1 through 6, only labels 0 and 7 are to be trusted. |
mradermacher/MopeyMule-Blackroot-8B-i1-GGUF | mradermacher | "2024-12-16T02:37:23Z" | 39 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B",
"base_model:quantized:Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-06-14T05:25:28Z" | ---
base_model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
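As a concrete usage sketch (assuming `llama-cpp-python` is installed; the file name is taken from the quant table below):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Hedged sketch: fetch one quant from this repo and run it locally.
path = hf_hub_download(
    "mradermacher/MopeyMule-Blackroot-8B-i1-GGUF",
    "MopeyMule-Blackroot-8B.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello, how are you?", max_tokens=64)["choices"][0]["text"])
```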
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MopeyMule-Blackroot-8B-i1-GGUF/resolve/main/MopeyMule-Blackroot-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lesso/376486a9-a13d-4285-837d-92a603f6e5de | lesso | "2025-02-09T00:13:12Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mixtral",
"axolotl",
"generated_from_trainer",
"base_model:TitanML/tiny-mixtral",
"base_model:adapter:TitanML/tiny-mixtral",
"region:us"
] | null | "2025-02-07T09:20:25Z" | ---
library_name: peft
base_model: TitanML/tiny-mixtral
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 376486a9-a13d-4285-837d-92a603f6e5de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 376486a9-a13d-4285-837d-92a603f6e5de
This model is a fine-tuned version of [TitanML/tiny-mixtral](https://huggingface.co/TitanML/tiny-mixtral) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.5250
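As this repo holds a PEFT adapter, a minimal loading sketch (assuming a standard LoRA layout) might look like:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: attach the adapter in this repo to its base model.
base = AutoModelForCausalLM.from_pretrained("TitanML/tiny-mixtral")
model = PeftModel.from_pretrained(base, "lesso/376486a9-a13d-4285-837d-92a603f6e5de")
tokenizer = AutoTokenizer.from_pretrained("TitanML/tiny-mixtral")
```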
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001006
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.5357 | 0.0012 | 1 | 10.5768 |
| 9.0359 | 0.0593 | 50 | 8.9945 |
| 9.412 | 0.1186 | 100 | 8.7903 |
| 9.381 | 0.1778 | 150 | 8.6029 |
| 8.9243 | 0.2371 | 200 | 8.5250 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lrios/sdxl-lora-solana-logo | lrios | "2025-02-21T20:14:44Z" | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2025-02-21T18:42:42Z" | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - lrios/sdxl-lora-solana-logo
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were fine-tuned on the None dataset. You can find some example images in the following.




LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
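Until the TODO above is filled in, a minimal sketch (assuming these weights load with diffusers' standard `load_lora_weights`; the prompt is illustrative) might look like:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Hedged sketch: load the SDXL base model, then attach these LoRA weights.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("lrios/sdxl-lora-solana-logo")
image = pipe("a solana logo", num_inference_steps=30).images[0]
image.save("solana_logo_sample.png")
```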
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
vedalken/ML6-interview | vedalken | "2025-02-18T08:00:49Z" | 1 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"en",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-17T16:54:10Z" | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
base_model:
- FacebookAI/roberta-base
pipeline_tag: text-classification
library_name: transformers
---
RoBERTa-base trained on the interview challenge's dataset for job description classification.
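A minimal usage sketch (the example text is a placeholder) might look like:

```python
from transformers import pipeline

# Hedged sketch: classify a job description with this checkpoint.
clf = pipeline("text-classification", model="vedalken/ML6-interview")
print(clf("We are hiring a senior data engineer to build ETL pipelines."))
```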
Repo of the interview: https://bitbucket.org/ml6team/challenge-classify-job-descriptions.git/src
For the training code, just send me a message or comment in this repo. |
buddhilive/roberta-finetuned-squad | buddhilive | "2023-09-09T19:44:51Z" | 59 | 0 | transformers | [
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-09-09T19:05:33Z" | ---
license: mit
base_model: roberta-base
tags:
- generated_from_keras_callback
model-index:
- name: buddhilive/roberta-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# buddhilive/roberta-finetuned-squad
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6953
- Validation Loss: 0.0
- Epoch: 2
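No usage example is given; a minimal extractive QA sketch (this repo ships TensorFlow weights, so `framework="tf"` is assumed; the question and context are placeholders) might look like:

```python
from transformers import pipeline

# Hedged sketch: extractive question answering with this TF checkpoint.
qa = pipeline("question-answering", model="buddhilive/roberta-finetuned-squad", framework="tf")
print(qa(
    question="Where do penguins live?",
    context="Penguins live almost exclusively in the Southern Hemisphere.",
))
```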
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.2629 | 0.0 | 0 |
| 0.7907 | 0.0 | 1 |
| 0.6953 | 0.0 | 2 |
### Framework versions
- Transformers 4.33.1
- TensorFlow 2.13.0
- Datasets 2.14.5
- Tokenizers 0.13.3
|
nmcco/18-qwen-1gpu | nmcco | "2025-03-18T21:10:12Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:nmcco/qwen-2.5-3b-speakertokens",
"base_model:finetune:nmcco/qwen-2.5-3b-speakertokens",
"endpoints_compatible",
"region:us"
] | null | "2025-03-18T20:10:25Z" | ---
base_model: nmcco/qwen-2.5-3b-speakertokens
library_name: transformers
model_name: 18-qwen-1gpu
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 18-qwen-1gpu
This model is a fine-tuned version of [nmcco/qwen-2.5-3b-speakertokens](https://huggingface.co/nmcco/qwen-2.5-3b-speakertokens).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nmcco/18-qwen-1gpu", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hwerzog-huh/huggingface/runs/wyfma7e8)
This model was trained with SFT.
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.2
- Pytorch: 2.4.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tronsdds/Qwen-Qwen1.5-0.5B-1724517284 | tronsdds | "2024-08-24T16:35:23Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | "2024-08-24T16:34:44Z" | ---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
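In the absence of author-provided code, a minimal sketch (assuming this repo holds a standard PEFT LoRA adapter for the listed base model; the prompt is a placeholder) might look like:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: load base + adapter, then generate.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-0.5B")
model = PeftModel.from_pretrained(base, "tronsdds/Qwen-Qwen1.5-0.5B-1724517284")
inputs = tokenizer("Hello, my name is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```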
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
mradermacher/MiquTotalMix-70B-i1-GGUF | mradermacher | "2024-05-06T06:03:19Z" | 17 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Undi95/MiquTotalMix-70B",
"base_model:quantized:Undi95/MiquTotalMix-70B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-03-22T22:36:45Z" | ---
base_model: Undi95/MiquTotalMix-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
weighted/imatrix quants of https://huggingface.co/Undi95/MiquTotalMix-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MiquTotalMix-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 18.7 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 20.8 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 23.7 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-Q2_K.gguf) | i1-Q2_K | 25.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 28.6 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 30.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 30.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 31.4 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 33.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 36.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 39.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 39.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 41.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 47.9 | |
| [GGUF](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 49.2 | |
| [PART 1](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/MiquTotalMix-70B-i1-GGUF/resolve/main/MiquTotalMix-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 57.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
daveempire/joker_teller | daveempire | "2025-03-26T15:45:11Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2025-03-26T15:45:11Z" | ---
license: creativeml-openrail-m
---
|
SmallDoge/qwen7b_instruct_shortcot_smallthoughts25k | SmallDoge | "2025-03-17T06:14:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-17T05:52:52Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
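Pending author details, a minimal sketch based only on this repo's tags (text-generation, conversational; the question is a placeholder) might look like:

```python
from transformers import pipeline

# Hedged sketch: chat-style generation with this checkpoint.
chat = pipeline(
    "text-generation",
    model="SmallDoge/qwen7b_instruct_shortcot_smallthoughts25k",
    device_map="auto",
)
messages = [{"role": "user", "content": "What is 17 * 23?"}]
print(chat(messages, max_new_tokens=128)[0]["generated_text"])
```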
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
havinash-ai/377d8250-51d3-4829-b83f-26d1ea9dfb50 | havinash-ai | "2025-01-17T00:23:34Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] | null | "2025-01-17T00:21:56Z" | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 377d8250-51d3-4829-b83f-26d1ea9dfb50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-128k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 328d72eb04626276_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/328d72eb04626276_train_data.json
type:
field_input: history_data
field_instruction: current_conv_data
field_output: response_supporter
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/377d8250-51d3-4829-b83f-26d1ea9dfb50
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/328d72eb04626276_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8b7ad9ea-6630-4a3b-83b5-7c7c26a52fac
wandb_project: Mine-SN56-2-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8b7ad9ea-6630-4a3b-83b5-7c7c26a52fac
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 377d8250-51d3-4829-b83f-26d1ea9dfb50
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7519
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.9583 | 0.0008 | 1 | 0.9906 |
| 3.8546 | 0.0023 | 3 | 0.9854 |
| 3.3839 | 0.0046 | 6 | 0.9354 |
| 3.3853 | 0.0069 | 9 | 0.7519 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
laquythang/8b0b95a8-cd27-4243-8419-09768a319970 | laquythang | "2025-01-13T21:30:36Z" | 12 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B",
"base_model:adapter:unsloth/SmolLM2-1.7B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-13T21:21:28Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8b0b95a8-cd27-4243-8419-09768a319970
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f58909f3ef8c0453_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f58909f3ef8c0453_train_data.json
type:
field_input: caption
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: laquythang/8b0b95a8-cd27-4243-8419-09768a319970
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/f58909f3ef8c0453_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 773751f8-2eac-44c5-b605-ae427895e535
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 773751f8-2eac-44c5-b605-ae427895e535
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 8b0b95a8-cd27-4243-8419-09768a319970
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B](https://huggingface.co/unsloth/SmolLM2-1.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4695
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.506 | 0.2905 | 200 | 0.4695 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tousek/llama_3_Mike-unsloth.Q4-K-M.gguf | tousek | "2024-06-17T14:49:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-06-17T14:48:52Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** tousek
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
pt-sk/VAE | pt-sk | "2024-05-27T14:17:31Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2024-05-26T09:01:13Z" | ---
license: mit
---
1. Encoder_Decoder_1_EPOCH and Encoder_Decoder_2_EPOCH were trained on the CelebA-HQ dataset, resized to 256 x 256, for 2 epochs (30,000 samples).
2. The 2nd-epoch checkpoint was then used to further train on the Face-HQ dataset, also resized to 256 x 256, for 1 epoch (70,000 samples). |
mamun4105/q-Taxi-v3 | mamun4105 | "2023-06-13T05:25:55Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-12T18:57:07Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym, pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):  # minimal stand-in for the HF Deep RL course helper
    return pickle.load(open(hf_hub_download(repo_id, filename), "rb"))

model = load_from_hub(repo_id="mamun4105/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
smp-hub/resnext101_32x32d.instagram | smp-hub | "2025-01-15T17:52:46Z" | 10 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"image-classification",
"pytorch",
"resnext",
"license:other",
"region:us"
] | image-classification | "2025-01-14T22:47:00Z" |
---
library_name: segmentation-models-pytorch
license: other
pipeline_tag: image-classification
tags:
- segmentation-models-pytorch
- image-classification
- pytorch
- resnext
languages:
- python
---
# Model card for resnext101_32x32d.instagram
This repository contains the `instagram` pre-trained weights for the `resnext101_32x32d` model used as
encoder in the [segmentation-models-pytorch](https://github.com/qubvel-org/segmentation_models.pytorch) library.
### Example usage:
1. Install the library:
```bash
pip install segmentation-models-pytorch
```
2. Use the encoder in your code:
```python
import segmentation_models_pytorch as smp
model = smp.Unet("resnext101_32x32d", encoder_weights="instagram")
```
### References
- Github: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
- Original weights URL: https://download.pytorch.org/models/ig_resnext101_32x32-e4b90b00.pth
|
lesso03/234fdb1b-7ef8-41a4-8274-e4be1e14f795 | lesso03 | "2025-01-03T03:00:49Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-7b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-7b-128k",
"region:us"
] | null | "2025-01-03T01:42:22Z" | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-7b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 234fdb1b-7ef8-41a4-8274-e4be1e14f795
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-7b-128k
bf16: true
chat_template: llama3
datasets:
- data_files:
- b1e346db6e3ba891_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b1e346db6e3ba891_train_data.json
type:
field_input: input
field_instruction: task
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: false
hub_model_id: lesso03/234fdb1b-7ef8-41a4-8274-e4be1e14f795
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 1.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 30
micro_batch_size: 4
mlflow_experiment_name: /tmp/b1e346db6e3ba891_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 20
save_strategy: steps
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 234fdb1b-7ef8-41a4-8274-e4be1e14f795
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 234fdb1b-7ef8-41a4-8274-e4be1e14f795
warmup_steps: 5
weight_decay: 0.01
xformers_attention: false
```
</details><br>
# 234fdb1b-7ef8-41a4-8274-e4be1e14f795
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-7b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-7b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.101 | 0.0001 | 1 | 5.3590 |
| 17.7617 | 0.0003 | 4 | 5.3538 |
| 7.2224 | 0.0006 | 8 | 5.2922 |
| 5.7201 | 0.0009 | 12 | 5.2118 |
| 7.3831 | 0.0011 | 16 | 5.1252 |
| 8.9314 | 0.0014 | 20 | 5.0581 |
| 9.4619 | 0.0017 | 24 | 5.0237 |
| 10.3213 | 0.0020 | 28 | 5.0115 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
N1ch0/chungli-ao-xlm-roberta-news | N1ch0 | "2025-01-13T12:17:59Z" | 165 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-12-09T14:51:07Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
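A minimal sketch based on the repo's `fill-mask` pipeline tag (the example sentence is illustrative, not taken from the training data):

```python
from transformers import pipeline

# XLM-RoBERTa tokenizers use <mask> as the mask token
fill = pipeline("fill-mask", model="N1ch0/chungli-ao-xlm-roberta-news")
for pred in fill("The news was published <mask>."):
    print(pred["token_str"], pred["score"])
```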
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
akrishnan/gpt2-20M-unlearning-supersampled_biographies_x10_lr_0.0005_seed_123 | akrishnan | "2025-02-24T17:16:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-24T17:12:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
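A minimal sketch based on the repo's `text-generation` tag (the prompt is illustrative):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="akrishnan/gpt2-20M-unlearning-supersampled_biographies_x10_lr_0.0005_seed_123",
)
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```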
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TitanML/Model-Cache | TitanML | "2024-04-24T10:02:00Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-24T10:02:00Z" | # AWQ Cache
Mapping HF model names to the names as saved in the TitanML directory. |
lesso05/b83b4218-94d4-4b1b-b937-2b2afe93b724 | lesso05 | "2025-01-13T07:28:04Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-13T05:58:21Z" | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b83b4218-94d4-4b1b-b937-2b2afe93b724
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3-mini-4k-instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- dcb1f5719033089d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/dcb1f5719033089d_train_data.json
type:
field_instruction: query
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: lesso05/b83b4218-94d4-4b1b-b937-2b2afe93b724
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 25
micro_batch_size: 2
mlflow_experiment_name: /tmp/dcb1f5719033089d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1e37225f-b010-4451-a04e-631b19f1656c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1e37225f-b010-4451-a04e-631b19f1656c
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b83b4218-94d4-4b1b-b937-2b2afe93b724
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0002 | 5 | nan |
| 0.0 | 0.0004 | 10 | nan |
| 0.0 | 0.0005 | 15 | nan |
| 0.0 | 0.0007 | 20 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gokulsrinivasagan/bert_base_lda_20_v1_rte | gokulsrinivasagan | "2024-12-04T12:50:37Z" | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_base_lda_20_v1",
"base_model:finetune:gokulsrinivasagan/bert_base_lda_20_v1",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-26T19:23:31Z" | ---
library_name: transformers
language:
- en
base_model: gokulsrinivasagan/bert_base_lda_20_v1
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert_base_lda_20_v1_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5270758122743683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_20_v1_rte
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_20_v1](https://huggingface.co/gokulsrinivasagan/bert_base_lda_20_v1) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6937
- Accuracy: 0.5271
## Model description
More information needed
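A minimal inference sketch; RTE is a sentence-pair task, so premise and hypothesis are passed together (the example pair is illustrative):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="gokulsrinivasagan/bert_base_lda_20_v1_rte")
# The text/text_pair dict form feeds both sentences to the model at once
print(classifier({"text": "A man is playing a guitar.", "text_pair": "A man is making music."}))
```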
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7289 | 1.0 | 10 | 0.7114 | 0.5415 |
| 0.7107 | 2.0 | 20 | 0.7287 | 0.4729 |
| 0.7029 | 3.0 | 30 | 0.6961 | 0.4693 |
| 0.6982 | 4.0 | 40 | 0.6937 | 0.5271 |
| 0.6891 | 5.0 | 50 | 0.7067 | 0.5343 |
| 0.6843 | 6.0 | 60 | 0.7118 | 0.5090 |
| 0.6558 | 7.0 | 70 | 0.7532 | 0.5054 |
| 0.5694 | 8.0 | 80 | 0.9040 | 0.4982 |
| 0.4815 | 9.0 | 90 | 1.0390 | 0.4549 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
megajajo/phi-1_5-finetuned | megajajo | "2024-05-04T19:50:04Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"region:us"
] | null | "2024-05-03T20:49:37Z" | ---
library_name: peft
base_model: microsoft/phi-1_5
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
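A minimal sketch of loading the adapter with PEFT's auto classes; the base model is read from this card, and the prompt is illustrative:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads microsoft/phi-1_5 and attaches the adapter weights from this repo
model = AutoPeftModelForCausalLM.from_pretrained("megajajo/phi-1_5-finetuned")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=40)[0]))
```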
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
alexeyGod/Test23 | alexeyGod | "2025-03-14T03:15:07Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] | text-to-image | "2025-03-14T03:12:37Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 'man TomBig, underwear,he is cooking '
parameters:
negative_prompt: >-
ng_deepnegative_v1_75t,(badhandv4:1.2),EasyNegative,(worst
quality:2),,nsfw
output:
url: >-
images/d902c4c08a06f43ba91a0728ebae4b0b846bbaae326d9b73d632f6ca92d0466f.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: TomBig
license: apache-2.0
---
# ggg
<Gallery />
## Trigger words
You should use `TomBig` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/alexeyGod/Test23/tree/main) them in the Files & versions tab.
|
RedSparkie/PaulaPua | RedSparkie | "2025-03-06T00:04:28Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"migrated",
"photorealistic",
"sexy",
"woman",
"celebrity",
"realistic",
"influencer",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-05T23:55:35Z" | ---
license: other
license_name: bespoke-lora-trained-license
license_link: >-
https://multimodal.art/civitai-licenses?allowNoCredit=True&allowCommercialUse=RentCivit&allowDerivatives=True&allowDifferentLicense=True
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
- migrated
- photorealistic
- sexy
- woman
- celebrity
- realistic
- influencer
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: PaulaPua
widget:
- text: >-
Posing for a LinkedIn profile picture. Expensive, black suit with white
shirt. Red lipstick. Platinum blond hair.
output:
url: images/example_sq5sxnbnr.png
---
[](https://ko-fi.com/redsparkie) |
lesso01/4fdc1db0-1091-49b7-aab5-4fce516ab3f9 | lesso01 | "2025-02-16T00:33:03Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"region:us"
] | null | "2025-02-15T23:55:29Z" | ---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4fdc1db0-1091-49b7-aab5-4fce516ab3f9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 4fdc1db0-1091-49b7-aab5-4fce516ab3f9
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1539
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000201
- train_batch_size: 4
- eval_batch_size: 4
- seed: 10
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.9383 |
| 4.6909 | 0.0014 | 50 | 2.4425 |
| 4.3 | 0.0027 | 100 | 2.3725 |
| 4.2006 | 0.0041 | 150 | 2.3445 |
| 4.0855 | 0.0054 | 200 | 2.3015 |
| 4.2416 | 0.0068 | 250 | 2.3096 |
| 4.1256 | 0.0081 | 300 | 2.2099 |
| 3.9752 | 0.0095 | 350 | 2.1886 |
| 4.1878 | 0.0108 | 400 | 2.1623 |
| 3.9657 | 0.0122 | 450 | 2.1546 |
| 4.0042 | 0.0135 | 500 | 2.1539 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
iManay/fil-bic-mt5b | iManay | "2024-05-29T06:05:11Z" | 7 | 0 | transformers | [
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"base_model:google/mt5-base",
"base_model:finetune:google/mt5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-05-27T03:29:19Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
base_model: google/mt5-base
model-index:
- name: fil-bic-mt5b
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# fil-bic-mt5b
This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2155
- Validation Loss: 2.4691
- Epoch: 19
## Model description
More information needed
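A minimal inference sketch; the repo ships TensorFlow weights (the card was generated from Keras callbacks), so the TF framework is requested explicitly. The translation direction is not documented, and the input sentence is illustrative:

```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="iManay/fil-bic-mt5b", framework="tf")
print(translator("Magandang umaga.", max_length=40))
```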
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.8782 | 3.0764 | 0 |
| 3.0550 | 2.4897 | 1 |
| 2.4361 | 2.1678 | 2 |
| 2.0424 | 1.9982 | 3 |
| 1.7454 | 1.8860 | 4 |
| 1.5089 | 1.8502 | 5 |
| 1.3137 | 1.8682 | 6 |
| 1.1487 | 1.8252 | 7 |
| 0.9793 | 1.8486 | 8 |
| 0.8428 | 1.8815 | 9 |
| 0.7107 | 1.9716 | 10 |
| 0.6015 | 2.0271 | 11 |
| 0.5106 | 2.1239 | 12 |
| 0.4392 | 2.2095 | 13 |
| 0.3786 | 2.2045 | 14 |
| 0.3315 | 2.2884 | 15 |
| 0.2882 | 2.3419 | 16 |
| 0.2587 | 2.4029 | 17 |
| 0.2354 | 2.4173 | 18 |
| 0.2155 | 2.4691 | 19 |
### Framework versions
- Transformers 4.37.2
- TensorFlow 2.15.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Larry2028/cn | Larry2028 | "2024-02-07T12:28:42Z" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2024-02-07T12:28:42Z" | ---
license: bigscience-openrail-m
---
|
johnpaulbin/meme-titles | johnpaulbin | "2023-06-28T00:05:35Z" | 178 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | Trained on ~400 youtube titles of meme compilations on youtube.
WARNING: may produce offensive content. |
Sneka/mistral-json-finetune | Sneka | "2023-11-21T13:03:58Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-21T12:17:50Z" | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: mistral-json-finetune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-json-finetune
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3730
## Model description
More information needed
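A minimal sketch of loading the fine-tuned weights; the JSON-style prompt is illustrative, since the card does not document the expected prompt format:

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Sneka/mistral-json-finetune",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate
)
print(pipe("Generate a JSON object describing a book:", max_new_tokens=100)[0]["generated_text"])
```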
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2986 | 0.57 | 50 | 0.6897 |
| 0.4469 | 1.14 | 100 | 0.4239 |
| 0.2978 | 1.71 | 150 | 0.3497 |
| 0.2279 | 2.29 | 200 | 0.3126 |
| 0.1869 | 2.86 | 250 | 0.2858 |
| 0.137 | 3.43 | 300 | 0.2848 |
| 0.1221 | 4.0 | 350 | 0.2713 |
| 0.0918 | 4.57 | 400 | 0.2750 |
| 0.0832 | 5.14 | 450 | 0.2781 |
| 0.0681 | 5.71 | 500 | 0.2807 |
| 0.055 | 6.29 | 550 | 0.2902 |
| 0.0515 | 6.86 | 600 | 0.2746 |
| 0.0463 | 7.43 | 650 | 0.3078 |
| 0.0421 | 8.0 | 700 | 0.2971 |
| 0.0354 | 8.57 | 750 | 0.3196 |
| 0.0276 | 9.14 | 800 | 0.3490 |
| 0.0281 | 9.71 | 850 | 0.3411 |
| 0.026 | 10.29 | 900 | 0.3658 |
| 0.0241 | 10.86 | 950 | 0.3661 |
| 0.0209 | 11.43 | 1000 | 0.3730 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
AstraMindAI/xttsv2 | AstraMindAI | "2024-12-04T15:10:04Z" | 3,806 | 3 | null | [
"xtts",
"custom_code",
"base_model:coqui/XTTS-v2",
"base_model:finetune:coqui/XTTS-v2",
"license:apache-2.0",
"region:us"
] | null | "2024-10-24T09:46:49Z" | ---
license: apache-2.0
base_model:
- coqui/XTTS-v2
---
# Auralis 🌌
## Model Details 🛠️
**Model Name:** Auralis
**Model Architecture:** Based on [Coqui XTTS-v2](https://huggingface.co/coqui/XTTS-v2)
**License:**
- license: Apache 2.0
- base_model: XTTS-v2 Components [Coqui AI License](https://coqui.ai/cpml)
**Language Support:** English, Spanish, French, German, Italian, Portuguese, Polish, Turkish, Russian, Dutch, Czech, Arabic, Chinese (Simplified), Hungarian, Korean, Japanese, Hindi
**Developed by:** [AstraMind.ai](https://www.astramind.ai)
**GitHub:** [AstraMind AI](https://github.com/astramind-ai/Auralis/tree/main)
**Primary Use Case:** Text-to-Speech (TTS) generation for real-world applications, including books, dialogues, and multilingual tasks.
---
## Model Description 🚀
Auralis transforms text into natural, high-quality speech with exceptional speed and scalability. It is powered by [Coqui XTTS-v2](https://huggingface.co/coqui/XTTS-v2) and optimized for both consumer-grade and high-performance GPUs. Auralis is designed to meet real-world needs like long-text processing, voice cloning, and concurrent request handling.
### Key Features:
- **Warp-Speed Processing:** Generate speech for an entire novel (e.g., Harry Potter) in ~10 minutes.
- **Hardware Friendly:** Requires <10GB VRAM on a single NVIDIA RTX 3090.
- **Scalable:** Handles multiple requests simultaneously.
- **Streaming:** Seamlessly processes long texts in a streaming format.
- **Custom Voices:** Enables voice cloning from short reference audio.
---
## Quick Start ⭐
```python
from auralis import TTS, TTSRequest
# Initialize the model
tts = TTS().from_pretrained("AstraMindAI/xtts2-gpt")
# Create a TTS request
request = TTSRequest(
text="Hello Earth! This is Auralis speaking.",
speaker_files=["reference.wav"]
)
# Generate speech
output = tts.generate_speech(request)
output.save("output.wav")
```
---
## Ebook Generation 📚
Auralis converts ebooks into audio at lightning speed. For a complete Python script, see [ebook_audio_generator.py](https://github.com/astramind-ai/Auralis/blob/main/examples/vocalize_a_ebook.py).
```python
# Minimal setup (assumed, mirroring the Quick Start above)
from auralis import TTS, TTSRequest, AudioPreprocessingConfig

tts = TTS().from_pretrained("AstraMindAI/xtts2-gpt")

def process_book(chapter_file: str, speaker_file: str):
# Read chapter
with open(chapter_file, 'r') as f:
chapter = f.read()
# You can pass the whole book, auralis will take care of splitting
request = TTSRequest(
text=chapter,
speaker_files=[speaker_file],
audio_config=AudioPreprocessingConfig(
enhance_speech=True,
normalize=True
)
)
output = tts.generate_speech(request)
output.play()
output.save("chapter_output.wav")
# Example usage
process_book("chapter1.txt", "reference_voice.wav")
```
---
## Intended Use 🌟
Auralis is designed for:
- **Content Creators:** Generate audiobooks, podcasts, or voiceovers.
- **Developers:** Integrate TTS into applications via a simple Python API.
- **Accessibility:** Provide audio versions of digital content for people with visual or reading difficulties.
- **Multilingual Scenarios:** Convert text to speech in multiple supported languages.
---
## Performance 📊
**Benchmarks on NVIDIA RTX 3090:**
- Short phrases (<100 characters): ~1 second
- Medium texts (<1,000 characters): ~5-10 seconds
- Full books (~100,000 characters): ~10 minutes
**Memory Usage:**
- Base VRAM: ~4GB
- Peak VRAM: ~10GB
---
## Model Features 🛸
1. **Speed & Efficiency:**
- Smart batching for rapid processing of long texts.
- Memory-optimized for consumer GPUs.
2. **Easy Integration:**
- Python API with support for synchronous and asynchronous workflows.
- Streaming mode for continuous playback during generation.
3. **Audio Quality Enhancements:**
- Background noise reduction.
- Voice clarity and volume normalization.
- Customizable audio preprocessing.
4. **Multilingual Support:**
- Automatic language detection.
- High-quality speech in 15+ languages.
5. **Customization:**
- Voice cloning using short reference clips.
- Adjustable parameters for tone, pacing, and language.
---
## Limitations & Ethical Considerations ⚠️
- **Voice Cloning Risks:** Auralis supports voice cloning, which may raise ethical concerns about misuse. Use responsibly and ensure proper consent.
- **Accent Limitations:** While robust for many languages, accents and intonations may vary based on the input.
---
## Citation 📜
If you use Auralis in your research or projects, please cite:
```bibtex
@misc{auralis2024,
author = {AstraMind AI},
title = {Auralis: High-Performance Text-to-Speech Engine},
year = {2024},
url = {https://huggingface.co/AstraMindAI/auralis}
}
``` |
kostiantynk1205/f41114b4-9f12-491d-bf4d-0b187dba056c | kostiantynk1205 | "2025-02-01T00:09:21Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b",
"base_model:adapter:EleutherAI/pythia-1b",
"license:apache-2.0",
"region:us"
] | null | "2025-01-31T23:49:46Z" | ---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f41114b4-9f12-491d-bf4d-0b187dba056c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/pythia-1b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 774eb1c112fe248e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/774eb1c112fe248e_train_data.json
type:
field_input: ''
field_instruction: answer
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/f41114b4-9f12-491d-bf4d-0b187dba056c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/774eb1c112fe248e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 27bd5a7b-564b-47ce-bf71-ab73f604ca9e
wandb_project: Birthday-SN56-23-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 27bd5a7b-564b-47ce-bf71-ab73f604ca9e
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f41114b4-9f12-491d-bf4d-0b187dba056c
This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4651
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 4.0608 |
| 10.5989 | 0.0007 | 50 | 2.5510 |
| 9.3417 | 0.0014 | 100 | 2.4980 |
| 10.5138 | 0.0021 | 150 | 2.4703 |
| 9.6977 | 0.0028 | 200 | 2.4651 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dah1214/faset-perf-whisper-medium-tw | dah1214 | "2024-02-20T17:36:35Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"region:us"
] | null | "2024-02-19T02:56:46Z" | ---
library_name: peft
base_model: openai/whisper-large-v2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
huggingtweets/pfrazee | huggingtweets | "2021-05-22T18:33:55Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/pfrazee/1617856674100/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1302310345411887107/0LpHs9Rd_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">Paul Frazee✌️ 🤖 AI Bot </div>
<div style="font-size: 15px">@pfrazee bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@pfrazee's tweets](https://twitter.com/pfrazee).
| Data | Quantity |
| --- | --- |
| Tweets downloaded | 3238 |
| Retweets | 334 |
| Short tweets | 237 |
| Tweets kept | 2667 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/134luj87/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @pfrazee's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ajilr7t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ajilr7t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/pfrazee')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Arkana08/LexiMaid-L3-8B | Arkana08 | "2024-10-15T07:46:20Z" | 10 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-13T05:04:14Z" | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
---
### (GGUF) Thanks:
#### mradermacher
- **GGUF:** **[mradermacher/LexiMaid-L3-8B-GGUF](https://huggingface.co/mradermacher/LexiMaid-L3-8B-GGUF)**
- **imatrix GGUF:** **[mradermacher/LexiMaid-L3-8B-i1-GGUF](https://huggingface.co/mradermacher/LexiMaid-L3-8B-i1-GGUF)**
---
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
---
**LexiMaid**: This experimental model grew out of Reddit discussions in which models such as **[Lumimaid-v0.2-8B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B)**, **[Dusk_Rainbow](https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow)**, **[Stheno-v3.4](https://huggingface.co/Sao10K/Llama-3.1-8B-Stheno-v3.4)**, **[Celeste-V1.5](https://huggingface.co/nothingiisreal/L3.1-8B-Celeste-V1.5)**, and **[Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)** were discussed, each with its own strengths and weaknesses. The criticism some of them received inspired this new synthesis.
To merge the models, I used the **DARE TIES** method, first combining **[Lumimaid-v0.2-8B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B)**, **[Dusk_Rainbow](https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow)**, and **[Stheno-v3.4](https://huggingface.co/Sao10K/Llama-3.1-8B-Stheno-v3.4)**, and then mixing the resulting model with **[Celeste-V1.5](https://huggingface.co/nothingiisreal/L3.1-8B-Celeste-V1.5)** and **[Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)**. **LexiMaid** is focused on roleplay and creative storytelling.
---
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method.
### Models Merged
The following models were included in the merge:
**[Lumimaid-v0.2-8B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B)**
**[Dusk_Rainbow](https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow)**
**[Stheno-v3.4](https://huggingface.co/Sao10K/Llama-3.1-8B-Stheno-v3.4)**
**[Celeste-V1.5](https://huggingface.co/nothingiisreal/L3.1-8B-Celeste-V1.5)**
**[Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)**
## Configuration
The following YAML configuration was used to produce LexiMaid:
```yaml
models:
- model: Arkana08/Maxi-Fail-L3-8b
parameters:
weight: 0.4
density: 0.7
- model: nothingiisreal/L3.1-8B-Celeste-V1.5
parameters:
weight: 0.3
density: 0.75
- model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
parameters:
weight: 0.3
density: 0.65
merge_method: dare_ties
base_model: Arkana08/Maxi-Fail-L3-8b
parameters:
int8_mask: true
dtype: bfloat16
```
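A minimal sketch of reproducing the merge with mergekit's Python API, assuming the YAML above is saved as `config.yaml` (the `mergekit-yaml` CLI is an equivalent route):

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Writes the merged LexiMaid weights to ./LexiMaid-L3-8B
run_merge(merge_config, "./LexiMaid-L3-8B", options=MergeOptions(cuda=True))
```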
## Credits
Thanks to the creators of the models:
- **[NeverSleep/Lumimaid-v0.2-8B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B)**
- **[SicariusSicariiStuff/Dusk_Rainbow](https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow)**
- **[Sao10K/Llama-3.1-8B-Stheno-v3.4](https://huggingface.co/Sao10K/Llama-3.1-8B-Stheno-v3.4)**
- **[nothingiisreal/L3.1-8B-Celeste-V1.5](https://huggingface.co/nothingiisreal/L3.1-8B-Celeste-V1.5)**
- **[Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2)**
|
hkufyp2024/binary-stock-classifier | hkufyp2024 | "2024-04-22T09:25:49Z" | 112 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-22T09:21:42Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: binary-stock-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# binary-stock-classifier
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1577
- Accuracy: 0.95
## Model description
More information needed
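A minimal inference sketch based on the repo's `text-classification` tag; the headline is illustrative, and the card does not document what the two labels mean:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="hkufyp2024/binary-stock-classifier")
print(classifier("Shares surged after the company beat earnings expectations."))
```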
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 90 | 0.0497 | 1.0 |
| No log | 2.0 | 180 | 0.0138 | 1.0 |
| No log | 3.0 | 270 | 0.4034 | 0.9 |
| No log | 4.0 | 360 | 0.2492 | 0.95 |
| No log | 5.0 | 450 | 0.1985 | 0.95 |
| 0.1536 | 6.0 | 540 | 0.0427 | 0.95 |
| 0.1536 | 7.0 | 630 | 0.2490 | 0.95 |
| 0.1536 | 8.0 | 720 | 0.1568 | 0.95 |
| 0.1536 | 9.0 | 810 | 0.1577 | 0.95 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
|
mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF | mradermacher | "2024-06-26T14:38:50Z" | 246 | 1 | transformers | [
"transformers",
"gguf",
"ja",
"en",
"base_model:neoai-inc/Llama-3-neoAI-8B-Chat-v0.1",
"base_model:quantized:neoai-inc/Llama-3-neoAI-8B-Chat-v0.1",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-26T14:08:14Z" | ---
base_model: neoai-inc/Llama-3-neoAI-8B-Chat-v0.1
language:
- ja
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/neoai-inc/Llama-3-neoAI-8B-Chat-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
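A minimal sketch of loading one of the files from the Provided Quants table below with `llama-cpp-python` (any other GGUF runtime works similarly; the chat message is illustrative):

```python
from llama_cpp import Llama

# Assumes the Q4_K_M quant listed below has already been downloaded
llm = Llama(model_path="Llama-3-neoAI-8B-Chat-v0.1.Q4_K_M.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "自己紹介をしてください。"}]
)
print(out["choices"][0]["message"]["content"])
```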
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-neoAI-8B-Chat-v0.1-GGUF/resolve/main/Llama-3-neoAI-8B-Chat-v0.1.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Dolphin-5.1-7b-i1-GGUF | mradermacher | "2025-01-24T12:08:07Z" | 575 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser",
"teknium/OpenHermes-2.5-Mistral-7B",
"en",
"base_model:Isotonic/Dolphin-5.1-7b",
"base_model:quantized:Isotonic/Dolphin-5.1-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-01-24T11:11:28Z" | ---
base_model: Isotonic/Dolphin-5.1-7b
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
- teknium/OpenHermes-2.5-Mistral-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Isotonic/Dolphin-5.1-7b
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Dolphin-5.1-7b-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
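If you only need to fetch a single quant file, a minimal sketch with `huggingface_hub` (the filename comes from the table below):

```python
# Sketch: download one imatrix quant from this repo and hand the path to your runtime.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Dolphin-5.1-7b-i1-GGUF",
    filename="Dolphin-5.1-7b.i1-Q4_K_M.gguf",
)
print(path)  # pass this path to llama.cpp or any GGUF-compatible runtime
```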
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Dolphin-5.1-7b-i1-GGUF/resolve/main/Dolphin-5.1-7b.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ZeroWw/Yi-1.5-9B-32K-GGUF | ZeroWw | "2024-06-25T14:37:00Z" | 11 | 1 | null | [
"gguf",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-06-25T14:19:16Z" |
---
license: mit
language:
- en
---
My own (ZeroWw) quantizations: the output and embedding tensors are kept at f16, while all other tensors are quantized to q5_k or q6_k.

Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization, yet they perform as well as the pure f16 model.
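For reference, a sketch of how such a mixed-precision quant can be produced with llama.cpp's quantize tool; the flag names are assumptions based on recent llama.cpp builds, so check `llama-quantize --help` before relying on them:

```python
# Sketch: keep embedding and output tensors at f16, quantize everything else to q6_k.
import subprocess

subprocess.run([
    "./llama-quantize",
    "--token-embedding-type", "f16",  # embed tensors stay at f16
    "--output-tensor-type", "f16",    # output tensor stays at f16
    "model.f16.gguf", "model.f16.q6.gguf", "q6_k",
], check=True)
```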
|
TroyDoesAI/MermaidMoE-15B | TroyDoesAI | "2024-04-12T01:59:24Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-07T16:37:31Z" | ---
license: cc-by-4.0
---
THIS MODEL IS A SELF MERGE WITH NO FURTHER TRAINING, RELEASED TO EVALUATE THE EFFECTS OF MERGING PRUNED MODELS TOGETHER IN A SELF MERGE. |
HunyStark/ppo-LunarLander-v2 | HunyStark | "2023-12-26T02:06:11Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-26T02:05:52Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.33 +/- 20.62
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the Files & versions tab for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="HunyStark/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
BogdanTurbal/model_gpt2_medium_d_hate_bias_political_bias_ep_1_3_a_sqn_a_b_p_100_5_v_10 | BogdanTurbal | "2024-08-11T20:21:45Z" | 6 | 0 | null | [
"tensorboard",
"safetensors",
"gpt2",
"generated_from_trainer",
"base_model:BogdanTurbal/model_gpt2_medium_d_hate_bias_ep_1_sqn_a_p_100_v_10",
"base_model:finetune:BogdanTurbal/model_gpt2_medium_d_hate_bias_ep_1_sqn_a_p_100_v_10",
"license:mit",
"region:us"
] | null | "2024-08-11T19:49:58Z" | ---
license: mit
base_model: BogdanTurbal/model_gpt2_medium_d_hate_bias_ep_1_sqn_a_p_100_v_10
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model_gpt2_medium_d_hate_bias_political_bias_ep_1_3_a_sqn_a_b_p_100_5_v_10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_gpt2_medium_d_hate_bias_political_bias_ep_1_3_a_sqn_a_b_p_100_5_v_10
This model is a fine-tuned version of [BogdanTurbal/model_gpt2_medium_d_hate_bias_ep_1_sqn_a_p_100_v_10](https://huggingface.co/BogdanTurbal/model_gpt2_medium_d_hate_bias_ep_1_sqn_a_p_100_v_10) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8600
- Accuracy: 0.7045
- F1 Micro: 0.7045
- Auc: 0.7730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Micro | Auc |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:------:|
| 0.7866 | 0.2703 | 10 | 0.6437 | 0.6306 | 0.6306 | 0.6997 |
| 0.6471 | 0.5405 | 20 | 0.6366 | 0.6452 | 0.6452 | 0.7367 |
| 0.6642 | 0.8108 | 30 | 0.5960 | 0.6787 | 0.6787 | 0.7630 |
| 0.622 | 1.0811 | 40 | 0.6366 | 0.6564 | 0.6564 | 0.7646 |
| 0.4354 | 1.3514 | 50 | 0.6507 | 0.6718 | 0.6718 | 0.7571 |
| 0.414 | 1.6216 | 60 | 0.6929 | 0.6804 | 0.6804 | 0.7575 |
| 0.4344 | 1.8919 | 70 | 0.6345 | 0.7010 | 0.7010 | 0.7744 |
| 0.2969 | 2.1622 | 80 | 0.6606 | 0.7045 | 0.7045 | 0.7788 |
| 0.2133 | 2.4324 | 90 | 0.7549 | 0.7027 | 0.7027 | 0.7746 |
| 0.1031 | 2.7027 | 100 | 0.8532 | 0.6942 | 0.6942 | 0.7743 |
| 0.1658 | 2.9730 | 110 | 0.8600 | 0.7045 | 0.7045 | 0.7730 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Shero448/EvelynZZZ-ilu | Shero448 | "2025-03-27T20:51:08Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:John6666/ilustrealmix-v20-sdxl",
"base_model:adapter:John6666/ilustrealmix-v20-sdxl",
"region:us"
] | text-to-image | "2025-03-27T20:50:55Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
masterpiece, best quality, in cafe, pizza on the table, chair, amazing
quality, very aesthetic, absurdres, best quality, 4k, upscale, EvelynZZZ,
1girl, solo, sitting, eating, yummy, looking at viewer, blonde hair, hair
bun, french braid, mole under eye, black choker, looking at viewer, purple
eyes, short hair, closed mouth, jewelry, white shirt, hair between eyes,
earrings, hair intakes, black jacket, large breasts, black gloves,
sleeveless shirt, yellow necktie, black pants, high-waist pants, collared
shirt, elbow gloves, thigh strap, bare shoulders, jacket on shoulders BREAK
heart <lora:EvelynZZZ_ILv5:0.8>
parameters:
negative_prompt: >-
pubic hair, 3d, cropped, loli, censored, (simple background:1.0), bar
censoring, mosaic censoring, siblings, bad legs, clones, monochrome,
clones, sketch, rought lines, patreon username, logo, bad proportions,
artist signature, artist name, english text, text, korean text, speech
bubble, sound effects, multiple views, 4koma,comics, comic,
output:
url: images/2025-02-28-175514_hassakuXLIllustrious_v13StyleA_448.png
base_model: John6666/ilustrealmix-v20-sdxl
instance_prompt: EvelynZZZ
---
# EvelynZZZ-ilu
<Gallery />
## Trigger words
You should use `EvelynZZZ` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Shero448/EvelynZZZ-ilu/tree/main) them in the Files & versions tab.
|
LyliaEngine/Sinozick_Style_XL_Pony | LyliaEngine | "2024-05-31T09:46:52Z" | 92 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:LyliaEngine/Pony_Diffusion_V6_XL",
"base_model:adapter:LyliaEngine/Pony_Diffusion_V6_XL",
"license:cdla-permissive-2.0",
"region:us"
] | text-to-image | "2024-05-31T09:44:35Z" | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
(score_9, score_8_up), score_7_up, zPDXL, 1girl, upper body, black mantle,
earrings, cyberpunk, eyepatch, neon eyepatch, black hair, wild hair, long
hair, red eyes, looking at viewer, expressionless, dark, dark theme, black
sclera, konohagakure symbol, forehead protector, naruto \(series\),
<lora:Sinozick_Style_XL_Pony:1>, sinozick style
parameters:
negative_prompt: >-
(extra fingers, deformed hands, polydactyl:1.1), (worst quality, low
quality:1.2), bad quality, shiny, blurry, artists signature, (multiple
tails), nuzzle, censored, pixelated, zPDXL-neg, pointy ears,
output:
url: images/00012-3760017729.jpeg
- text: >-
(score_9, score_8_up), score_7_up, zPDXL, 1girl, white hair, short hair,
white eyes, mouth mask, looking at viewer, white kimono, red background,
film grain, cowboy shot <lora:Sinozick_Style_XL_Pony:1>, sinozick style
parameters:
negative_prompt: >-
(extra fingers, deformed hands, polydactyl:1.1), (worst quality, low
quality:1.2), bad quality, shiny, blurry, artists signature, (multiple
tails), nuzzle, censored, pixelated, zPDXL-neg, pointy ears,
output:
url: images/00019-46392353.jpeg
base_model: LyliaEngine/Pony_Diffusion_V6_XL
instance_prompt: sinozick style, flat color, dark theme
license: cdla-permissive-2.0
---
# Sinozick_Style_XL_Pony
<Gallery />
## Model description
Sinozick is an AI artist on Twitter whose work I like a lot; the style he gets in his images is incredible, and I wanted to reproduce it as best I could. I think he uses MidJourney, and SD can't replicate it perfectly, but I'm satisfied enough with the result.
One flaw:
It works best for OCs; using it with pre-made characters can reduce the impact of the style.
Activation prompt: sinozick style
Helpful prompts: dark theme, flat color
If you enjoyed this LoRA, consider leaving a like and posting some images! Thanks! <3
## Source
https://civitai.com/models/432483/sinozick-style-or-style-lora-or-pony
## Credit
https://civitai.com/user/LennonAI
## Trigger words
You should use `sinozick style` to trigger the image generation.
You should use `flat color` to trigger the image generation.
You should use `dark theme` to trigger the image generation.
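A minimal diffusers sketch (the base checkpoint follows this card's `base_model` field; the prompt is just an example):

```python
# Sketch: apply this LoRA on its Pony Diffusion V6 XL base with diffusers.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "LyliaEngine/Pony_Diffusion_V6_XL", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("LyliaEngine/Sinozick_Style_XL_Pony")
image = pipe("sinozick style, dark theme, flat color, 1girl, upper body").images[0]
image.save("sinozick.png")
```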
## Download model
Weights for this model are available in Safetensors format.
[Download](/LyliaEngine/Sinozick_Style_XL_Pony/tree/main) them in the Files & versions tab.
|
Mimita6654/test | Mimita6654 | "2022-07-02T07:47:08Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-07-02T07:44:21Z" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
systemk/estimation-helpsteer-gemma-2-2b | systemk | "2025-03-27T05:42:02Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:google/gemma-2-2b-jpn-it",
"base_model:adapter:google/gemma-2-2b-jpn-it",
"license:gemma",
"region:us"
] | null | "2025-03-27T05:41:57Z" | ---
library_name: peft
license: gemma
base_model: google/gemma-2-2b-jpn-it
tags:
- generated_from_trainer
model-index:
- name: gemma-2-2b-evaluator-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2-2b-evaluator-v2
This model is a fine-tuned version of [google/gemma-2-2b-jpn-it](https://huggingface.co/google/gemma-2-2b-jpn-it) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8410
- Helpfulness Accuracy: 0.5109
- Helpfulness Spearmanr: 0.5327
- Helpfulness Kendalltau: 0.3736
- Helpfulness Pearsonr: 0.5339
- Helpfulness Rmse: 0.5903
- Helpfulness Mae: 0.4713
- Correctness Accuracy: 0.6014
- Correctness Spearmanr: 0.5531
- Correctness Kendalltau: 0.3943
- Correctness Pearsonr: 0.5859
- Correctness Rmse: 0.5292
- Correctness Mae: 0.4221
- Coherence Accuracy: 0.6998
- Coherence Spearmanr: 0.4640
- Coherence Kendalltau: 0.3232
- Coherence Pearsonr: 0.4909
- Coherence Rmse: 0.5011
- Coherence Mae: 0.4405
- Complexity Accuracy: 0.6064
- Complexity Spearmanr: -0.0021
- Complexity Kendalltau: -0.0026
- Complexity Pearsonr: -0.0076
- Complexity Rmse: 0.3605
- Complexity Mae: 0.3205
- Verbosity Accuracy: 0.6362
- Verbosity Spearmanr: 0.4071
- Verbosity Kendalltau: 0.2757
- Verbosity Pearsonr: 0.3365
- Verbosity Rmse: 0.3724
- Verbosity Mae: 0.3068
- Avg Accuracy: 0.6109
- Avg Spearmanr: 0.3910
- Avg Kendalltau: 0.2728
- Avg Pearsonr: 0.3879
- Avg Rmse: 0.4707
- Avg Mae: 0.3922
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine_with_min_lr
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Helpfulness Accuracy | Helpfulness Spearmanr | Helpfulness Kendalltau | Helpfulness Pearsonr | Helpfulness Rmse | Helpfulness Mae | Correctness Accuracy | Correctness Spearmanr | Correctness Kendalltau | Correctness Pearsonr | Correctness Rmse | Correctness Mae | Coherence Accuracy | Coherence Spearmanr | Coherence Kendalltau | Coherence Pearsonr | Coherence Rmse | Coherence Mae | Complexity Accuracy | Complexity Spearmanr | Complexity Kendalltau | Complexity Pearsonr | Complexity Rmse | Complexity Mae | Verbosity Accuracy | Verbosity Spearmanr | Verbosity Kendalltau | Verbosity Pearsonr | Verbosity Rmse | Verbosity Mae | Avg Accuracy | Avg Spearmanr | Avg Kendalltau | Avg Pearsonr | Avg Rmse | Avg Mae |
|:-------------:|:------:|:----:|:---------------:|:--------------------:|:---------------------:|:----------------------:|:--------------------:|:----------------:|:---------------:|:--------------------:|:---------------------:|:----------------------:|:--------------------:|:----------------:|:---------------:|:------------------:|:-------------------:|:--------------------:|:------------------:|:--------------:|:-------------:|:-------------------:|:--------------------:|:---------------------:|:-------------------:|:---------------:|:--------------:|:------------------:|:-------------------:|:--------------------:|:------------------:|:--------------:|:-------------:|:------------:|:-------------:|:--------------:|:------------:|:--------:|:-------:|
| No log | 0 | 0 | 6.0772 | 0.3807 | 0.0234 | 0.0151 | 0.0297 | 0.9409 | 0.7463 | 0.1471 | 0.0071 | 0.0048 | 0.0298 | 1.3459 | 1.1590 | 0.3698 | 0.0093 | 0.0059 | 0.0060 | 1.2779 | 1.0608 | 0.0258 | -0.0438 | -0.0291 | -0.0464 | 2.1264 | 1.9972 | 0.3698 | -0.0471 | -0.0310 | -0.0500 | 1.0197 | 0.7998 | 0.2586 | -0.0102 | -0.0069 | -0.0062 | 1.3422 | 1.1526 |
| 1.2731 | 0.2094 | 500 | 1.2058 | 0.4404 | 0.1512 | 0.0986 | 0.1462 | 0.7513 | 0.6121 | 0.5338 | 0.1290 | 0.0836 | 0.1806 | 0.6719 | 0.5388 | 0.6700 | 0.1137 | 0.0764 | 0.1443 | 0.5356 | 0.4510 | 0.6054 | -0.2206 | -0.1464 | -0.0710 | 0.4624 | 0.4287 | 0.6243 | 0.0942 | 0.0622 | -0.0191 | 0.4919 | 0.4087 | 0.5748 | 0.0535 | 0.0349 | 0.0762 | 0.5826 | 0.4878 |
| 0.9212 | 0.4188 | 1000 | 0.9210 | 0.4980 | 0.4385 | 0.3020 | 0.4357 | 0.5932 | 0.4747 | 0.5944 | 0.4142 | 0.2897 | 0.4702 | 0.5473 | 0.4357 | 0.7068 | 0.3103 | 0.2104 | 0.3688 | 0.4894 | 0.4244 | 0.6054 | -0.2491 | -0.1664 | -0.2532 | 0.4269 | 0.3880 | 0.6362 | 0.2771 | 0.1860 | 0.1669 | 0.3959 | 0.3246 | 0.6082 | 0.2382 | 0.1643 | 0.2377 | 0.4905 | 0.4095 |
| 0.8859 | 0.6283 | 1500 | 0.8554 | 0.4911 | 0.5129 | 0.3572 | 0.5111 | 0.5972 | 0.4769 | 0.5755 | 0.5366 | 0.3813 | 0.5659 | 0.5430 | 0.4334 | 0.6958 | 0.4329 | 0.3006 | 0.4621 | 0.5038 | 0.4418 | 0.6054 | -0.1199 | -0.0813 | -0.1286 | 0.3882 | 0.3489 | 0.6362 | 0.3749 | 0.2544 | 0.2937 | 0.3931 | 0.3275 | 0.6008 | 0.3475 | 0.2424 | 0.3409 | 0.4851 | 0.4057 |
| 0.7737 | 0.8377 | 2000 | 0.8410 | 0.5109 | 0.5327 | 0.3736 | 0.5339 | 0.5903 | 0.4713 | 0.6014 | 0.5531 | 0.3943 | 0.5859 | 0.5292 | 0.4221 | 0.6998 | 0.4640 | 0.3232 | 0.4909 | 0.5011 | 0.4405 | 0.6064 | -0.0021 | -0.0026 | -0.0076 | 0.3605 | 0.3205 | 0.6362 | 0.4071 | 0.2757 | 0.3365 | 0.3724 | 0.3068 | 0.6109 | 0.3910 | 0.2728 | 0.3879 | 0.4707 | 0.3922 |
### Framework versions
- PEFT 0.15.0
- Transformers 4.50.1
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1 |
BEE-spoke-data/TinyLlama-1.1bee | BEE-spoke-data | "2024-01-07T03:55:10Z" | 18 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"bees",
"beekeeping",
"honey",
"en",
"dataset:BEE-spoke-data/bees-internal",
"base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-240k-503b",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-intermediate-step-240k-503b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-09-19T22:30:43Z" | ---
license: apache-2.0
base_model: PY007/TinyLlama-1.1B-intermediate-step-240k-503b
tags:
- bees
- beekeeping
- honey
metrics:
- accuracy
inference:
parameters:
max_new_tokens: 64
do_sample: true
renormalize_logits: true
repetition_penalty: 1.05
no_repeat_ngram_size: 6
temperature: 0.9
top_p: 0.95
epsilon_cutoff: 0.0008
widget:
- text: In beekeeping, the term "queen excluder" refers to
example_title: Queen Excluder
- text: One way to encourage a honey bee colony to produce more honey is by
example_title: Increasing Honey Production
- text: The lifecycle of a worker bee consists of several stages, starting with
example_title: Lifecycle of a Worker Bee
- text: Varroa destructor is a type of mite that
example_title: Varroa Destructor
- text: In the world of beekeeping, the acronym PPE stands for
example_title: Beekeeping PPE
- text: The term "robbing" in beekeeping refers to the act of
example_title: Robbing in Beekeeping
- text: |-
Question: What's the primary function of drone bees in a hive?
Answer:
example_title: Role of Drone Bees
- text: To harvest honey from a hive, beekeepers often use a device known as a
example_title: Honey Harvesting Device
- text: >-
Problem: You have a hive that produces 60 pounds of honey per year. You
decide to split the hive into two. Assuming each hive now produces at a 70%
rate compared to before, how much honey will you get from both hives next
year?
To calculate
example_title: Beekeeping Math Problem
- text: In beekeeping, "swarming" is the process where
example_title: Swarming
pipeline_tag: text-generation
datasets:
- BEE-spoke-data/bees-internal
language:
- en
---
# TinyLlama-1.1bee 🐝

As we feverishly hit the refresh button on hf.co's homepage, on the hunt for the newest waifu chatbot to grace the AI stage, an epiphany struck us like a bee sting. What could we offer to the hive-mind of the community? The answer was as clear as honey—beekeeping, naturally. And thus, this un-bee-lievable model was born.
## Details
This model is a fine-tuned version of [PY007/TinyLlama-1.1B-intermediate-step-240k-503b](https://huggingface.co/PY007/TinyLlama-1.1B-intermediate-step-240k-503b) on the `BEE-spoke-data/bees-internal` dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4285
- Accuracy: 0.4969
```
***** eval metrics *****
eval_accuracy = 0.4972
eval_loss = 2.4283
eval_runtime = 0:00:53.12
eval_samples = 239
eval_samples_per_second = 4.499
eval_steps_per_second = 1.129
perplexity = 11.3391
```
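The reported perplexity is simply the exponential of the eval loss, which you can verify in one line:

```python
# exp(eval_loss) reproduces the reported perplexity.
import math
print(math.exp(2.4283))  # ~11.339
```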
## 📜 Intended Uses & Limitations 📜
### Intended Uses:
1. **Educational Engagement**: Whether you're a novice beekeeper, an enthusiast, or someone just looking to understand the buzz around bees, this model aims to serve as an informative and entertaining resource.
2. **General Queries**: Have questions about hive management, bee species, or honey extraction? Feel free to consult the model for general insights.
3. **Academic & Research Inspiration**: If you're diving into the world of apiculture studies or environmental science, our model could offer some preliminary insights and ideas.
### Limitations:
1. **Not a Beekeeping Expert**: As much as we admire bees and their hard work, this model is not a certified apiculturist. Please consult professional beekeeping resources or experts for serious decisions related to hive management, bee health, and honey production.
2. **Licensing**: Apache-2.0, following TinyLlama
3. **Infallibility**: Our model can err, just like any other piece of technology (or bee). Always double-check the information before applying it to your own hive or research.
4. **Ethical Constraints**: This model may not be used for any illegal or unethical activities, including but not limited to: bioterrorism & standard terrorism, harassment, or spreading disinformation.
## Training and evaluation data
While the full dataset is not yet complete and therefore not yet released for "safety reasons", you can check out a preliminary sample at: [bees-v0](https://huggingface.co/datasets/BEE-spoke-data/bees-v0)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 80085
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2.0
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__TinyLlama-1.1bee)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 29.15 |
| ARC (25-shot) | 30.55 |
| HellaSwag (10-shot) | 51.8 |
| MMLU (5-shot) | 24.25 |
| TruthfulQA (0-shot) | 39.01 |
| Winogrande (5-shot) | 54.46 |
| GSM8K (5-shot) | 0.23 |
| DROP (3-shot) | 3.74 |
|
TheBlueObserver/SmolLM2-1.7B-Instruct-MLX-393a7 | TheBlueObserver | "2024-12-20T10:45:33Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"onnx",
"transformers.js",
"mlx",
"conversational",
"en",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:quantized:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | "2024-12-20T10:44:42Z" | ---
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
tags:
- safetensors
- onnx
- transformers.js
- mlx
---
# TheBlueObserver/SmolLM2-1.7B-Instruct-MLX-393a7
The Model [TheBlueObserver/SmolLM2-1.7B-Instruct-MLX-393a7](https://huggingface.co/TheBlueObserver/SmolLM2-1.7B-Instruct-MLX-393a7) was
converted to MLX format from [HuggingFaceTB/SmolLM2-1.7B-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct)
using mlx-lm version **0.20.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("TheBlueObserver/SmolLM2-1.7B-Instruct-MLX-393a7")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
cpayne1303/cp2024-instruct | cpayne1303 | "2024-11-28T13:02:39Z" | 156 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:cpayne1303/cp2024",
"base_model:finetune:cpayne1303/cp2024",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-27T19:22:31Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
base_model: cpayne1303/cp2024
datasets:
- teknium/OpenHermes-2.5
model-index:
- name: cp2024-instruct
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 17.06
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cpayne1303/cp2024-instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 2.48
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cpayne1303/cp2024-instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 0.0
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cpayne1303/cp2024-instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 1.34
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cpayne1303/cp2024-instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.18
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cpayne1303/cp2024-instruct
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 1.85
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=cpayne1303/cp2024-instruct
name: Open LLM Leaderboard
---
## Model Description
This is a model using the llama2 architecture with only 30 million parameters. It is based on <a href = "https://huggingface.co/cpayne1303/cp2024">this model</a> and was fine-tuned on approximately 85 million tokens of instruct data from the first 20,000 rows of the OpenHermes 2.5 dataset, with a low learning rate of 2e-6 and a context length of 512.
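A minimal generation sketch with transformers (the plain-text prompt below is an assumption; check the tokenizer's chat template for the exact instruct format):

```python
# Sketch: load and sample from this model with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "cpayne1303/cp2024-instruct"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tok("Explain what a language model is.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```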
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cpayne1303__cp2024-instruct)
| Metric |Value|
|-------------------|----:|
|Avg. | 4.32|
|IFEval (0-Shot) |17.06|
|BBH (3-Shot) | 2.48|
|MATH Lvl 5 (4-Shot)| 0.00|
|GPQA (0-shot) | 1.34|
|MuSR (0-shot) | 3.18|
|MMLU-PRO (5-shot) | 1.85|
|
Chenchuhui/Icelandic-finetuned-no-data-augmentation | Chenchuhui | "2023-12-18T17:43:02Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:carlosdanielhernandezmena/wav2vec2-large-xlsr-53-icelandic-ep10-1000h",
"base_model:finetune:carlosdanielhernandezmena/wav2vec2-large-xlsr-53-icelandic-ep10-1000h",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-12-14T17:56:05Z" | ---
license: cc-by-4.0
base_model: carlosdanielhernandezmena/wav2vec2-large-xlsr-53-icelandic-ep10-1000h
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Icelandic-finetuned-no-data-augmentation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Icelandic-finetuned-no-data-augmentation
This model is a fine-tuned version of [carlosdanielhernandezmena/wav2vec2-large-xlsr-53-icelandic-ep10-1000h](https://huggingface.co/carlosdanielhernandezmena/wav2vec2-large-xlsr-53-icelandic-ep10-1000h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1262
- Wer: 0.1544
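A minimal transcription sketch with the transformers ASR pipeline (the audio filename is a placeholder):

```python
# Sketch: transcribe Icelandic speech with this fine-tuned wav2vec2 checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Chenchuhui/Icelandic-finetuned-no-data-augmentation",
)
print(asr("sample.wav")["text"])
```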
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.2 | 10 | 0.2849 | 0.1667 |
| No log | 0.4 | 20 | 0.3085 | 0.1611 |
| No log | 0.6 | 30 | 0.3264 | 0.1600 |
| No log | 0.8 | 40 | 0.3149 | 0.1577 |
| 0.3456 | 1.0 | 50 | 0.3221 | 0.1588 |
| 0.3456 | 1.2 | 60 | 0.3183 | 0.1577 |
| 0.3456 | 1.4 | 70 | 0.3133 | 0.1510 |
| 0.3456 | 1.6 | 80 | 0.3069 | 0.1521 |
| 0.3456 | 1.8 | 90 | 0.2970 | 0.1521 |
| 0.2689 | 2.0 | 100 | 0.2782 | 0.1477 |
| 0.2689 | 2.2 | 110 | 0.2242 | 0.1488 |
| 0.2689 | 2.4 | 120 | 0.2257 | 0.1454 |
| 0.2689 | 2.6 | 130 | 0.2722 | 0.1521 |
| 0.2689 | 2.8 | 140 | 0.2564 | 0.1488 |
| 0.2312 | 3.0 | 150 | 0.1811 | 0.1521 |
| 0.2312 | 3.2 | 160 | 0.1762 | 0.1454 |
| 0.2312 | 3.4 | 170 | 0.2154 | 0.1421 |
| 0.2312 | 3.6 | 180 | 0.1873 | 0.1465 |
| 0.2312 | 3.8 | 190 | 0.2015 | 0.1465 |
| 0.2153 | 4.0 | 200 | 0.2402 | 0.1499 |
| 0.2153 | 4.2 | 210 | 0.2366 | 0.1454 |
| 0.2153 | 4.4 | 220 | 0.1890 | 0.1477 |
| 0.2153 | 4.6 | 230 | 0.1897 | 0.1499 |
| 0.2153 | 4.8 | 240 | 0.1777 | 0.1488 |
| 0.1986 | 5.0 | 250 | 0.1659 | 0.1555 |
| 0.1986 | 5.2 | 260 | 0.1936 | 0.1588 |
| 0.1986 | 5.4 | 270 | 0.2044 | 0.1532 |
| 0.1986 | 5.6 | 280 | 0.1958 | 0.1555 |
| 0.1986 | 5.8 | 290 | 0.1760 | 0.1521 |
| 0.1842 | 6.0 | 300 | 0.2056 | 0.1600 |
| 0.1842 | 6.2 | 310 | 0.1649 | 0.1532 |
| 0.1842 | 6.4 | 320 | 0.2269 | 0.1532 |
| 0.1842 | 6.6 | 330 | 0.1572 | 0.1488 |
| 0.1842 | 6.8 | 340 | 0.1890 | 0.1600 |
| 0.181 | 7.0 | 350 | 0.1757 | 0.1700 |
| 0.181 | 7.2 | 360 | 0.2322 | 0.1644 |
| 0.181 | 7.4 | 370 | 0.2644 | 0.1600 |
| 0.181 | 7.6 | 380 | 0.2047 | 0.1555 |
| 0.181 | 7.8 | 390 | 0.2406 | 0.1678 |
| 0.1751 | 8.0 | 400 | 0.2820 | 0.1678 |
| 0.1751 | 8.2 | 410 | 0.2965 | 0.1655 |
| 0.1751 | 8.4 | 420 | 0.2841 | 0.1667 |
| 0.1751 | 8.6 | 430 | 0.2527 | 0.1734 |
| 0.1751 | 8.8 | 440 | 0.5464 | 0.1700 |
| 0.1836 | 9.0 | 450 | 0.2185 | 0.1622 |
| 0.1836 | 9.2 | 460 | 0.2129 | 0.1857 |
| 0.1836 | 9.4 | 470 | 0.3367 | 0.1779 |
| 0.1836 | 9.6 | 480 | 0.1457 | 0.1644 |
| 0.1836 | 9.8 | 490 | 0.1320 | 0.1667 |
| 0.2141 | 10.0 | 500 | 0.1262 | 0.1544 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
hahaqwqw/q-FrozenLake-v1-4x4-noSlippery | hahaqwqw | "2025-03-21T00:36:06Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2025-03-21T00:36:03Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL Course (Unit 2).
model = load_from_hub(repo_id="hahaqwqw/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
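Once loaded, a greedy rollout looks like this (assuming the pickled dict stores the Q-table under `"qtable"`, as in the Deep RL Course format):

```python
# Sketch: act greedily with the loaded Q-table until the episode ends.
import numpy as np

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```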
|
matatonic/Mistral-Small-24B-Instruct-2501-5.0bpw-exl2 | matatonic | "2025-01-30T19:54:23Z" | 21 | 0 | vllm | [
"vllm",
"safetensors",
"mistral",
"text-generation",
"transformers",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:mistralai/Mistral-Small-24B-Base-2501",
"base_model:quantized:mistralai/Mistral-Small-24B-Base-2501",
"license:apache-2.0",
"text-generation-inference",
"5-bit",
"exl2",
"region:us"
] | text-generation | "2025-01-30T19:53:13Z" | ---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: apache-2.0
library_name: vllm
inference: false
base_model:
- mistralai/Mistral-Small-24B-Base-2501
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- transformers
---
# Model Card for Mistral-Small-24B-Instruct-2501
Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models!
This model is an instruction-fine-tuned version of the base model: [Mistral-Small-24B-Base-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Base-2501).
Mistral Small can be deployed locally and is exceptionally "knowledge-dense", fitting in a single RTX 4090 or a 32GB RAM MacBook once quantized.
Perfect for:
- Fast response conversational agents.
- Low latency function calling.
- Subject matter experts via fine-tuning.
- Local inference for hobbyists and organizations handling sensitive data.
For enterprises that need specialized capabilities (increased context, particular modalities, domain specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community.
This release demonstrates our commitment to open source, serving as a strong base model.
Learn more about Mistral Small in our [blog post](https://mistral.ai/news/mistral-small-3/).
Model developer: Mistral AI Team
## Key Features
- **Multilingual:** Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.
- **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON outputting.
- **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities.
- **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
- **Context Window:** A 32k context window.
- **System Prompt:** Maintains strong adherence and support for system prompts.
- **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size.
## Benchmark results
### Human evaluated benchmarks
| Category | Gemma-2-27B | Qwen-2.5-32B | Llama-3.3-70B | Gpt4o-mini |
|----------|-------------|--------------|---------------|------------|
| Mistral is better | 0.536 | 0.496 | 0.192 | 0.200 |
| Mistral is slightly better | 0.196 | 0.184 | 0.164 | 0.204 |
| Ties | 0.052 | 0.060 | 0.236 | 0.160 |
| Other is slightly better | 0.060 | 0.088 | 0.112 | 0.124 |
| Other is better | 0.156 | 0.172 | 0.296 | 0.312 |
**Note**:
- We conducted side by side evaluations with an external third-party vendor, on a set of over 1k proprietary coding and generalist prompts.
- Evaluators were tasked with selecting their preferred model response from anonymized generations produced by Mistral Small 3 vs another model.
- We are aware that in some cases the benchmarks on human judgement starkly differ from publicly available benchmarks, but have taken extra caution in verifying a fair evaluation. We are confident that the above benchmarks are valid.
### Publicly accessible benchmarks
**Reasoning & Knowledge**
| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|---------------|--------------|---------------|---------------|-------------|
| mmlu_pro_5shot_cot_instruct | 0.663 | 0.536 | 0.666 | 0.683 | 0.617 |
| gpqa_main_cot_5shot_instruct | 0.453 | 0.344 | 0.531 | 0.404 | 0.377 |
**Math & Coding**
| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|---------------|--------------|---------------|---------------|-------------|
| humaneval_instruct_pass@1 | 0.848 | 0.732 | 0.854 | 0.909 | 0.890 |
| math_instruct | 0.706 | 0.535 | 0.743 | 0.819 | 0.761 |
**Instruction following**
| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|------------|---------------|--------------|---------------|---------------|-------------|
| mtbench_dev | 8.35 | 7.86 | 7.96 | 8.26 | 8.33 |
| wildbench | 52.27 | 48.21 | 50.04 | 52.73 | 56.13 |
| arena_hard | 0.873 | 0.788 | 0.840 | 0.860 | 0.897 |
| ifeval | 0.829 | 0.8065 | 0.8835 | 0.8401 | 0.8499 |
**Note**:
- Performance accuracy on all benchmarks was obtained through the same internal evaluation pipeline; as such, numbers may vary slightly from previously reported performance
([Qwen2.5-32B-Instruct](https://qwenlm.github.io/blog/qwen2.5/), [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), [Gemma-2-27B-IT](https://huggingface.co/google/gemma-2-27b-it)).
- Judge based evals such as Wildbench, Arena hard and MTBench were based on gpt-4o-2024-05-13.
### Basic Instruct Template (V7-Tekken)
```
<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]
```
*`<system_prompt>`, `<user message>` and `<assistant response>` are placeholders.*
***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth***
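To sanity-check the rendered prompt, here is a sketch using the transformers chat template (mistral-common remains the source of truth):

```python
# Sketch: render the V7-Tekken layout from a message list.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-Small-24B-Instruct-2501")
text = tok.apply_chat_template(
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    tokenize=False,
    add_generation_prompt=True,
)
print(text)  # should match the [SYSTEM_PROMPT]...[INST]...[/INST] layout above
```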
## Usage
The model can be used with the following frameworks;
- [`vllm`](https://github.com/vllm-project/vllm): See [here](#vLLM)
- [`transformers`](https://github.com/huggingface/transformers): See [here](#Transformers)
### vLLM
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
**Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`.
**Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following
system prompt:
```
system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")"""
```
**_Installation_**
Make sure you install [`vLLM >= 0.6.4`](https://github.com/vllm-project/vllm/releases/tag/v0.6.4):
```
pip install --upgrade vllm
```
Also make sure you have [`mistral_common >= 1.5.2`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.2) installed:
```
pip install --upgrade mistral_common
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or on the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39).
#### Server
We recommend that you use Mistral-Small-24B-Instruct-2501 in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Mistral-Small-24B-Instruct-2501 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice
```
**Note:** Running Mistral-Small-24B-Instruct-2501 on GPU requires ~55 GB of GPU RAM in bf16 or fp16.
2. To query the server from a client, you can use a simple Python snippet.
```py
import requests
import json
from datetime import datetime, timedelta
url = "http://<your-server>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Mistral-Small-24B-Instruct-2501"
messages = [
{
"role": "system",
"content": "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
},
{
"role": "user",
"content": "Give me 5 non-formal ways to say 'See you later' in French."
},
]
data = {"model": model, "messages": messages}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])
# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
# /\_/\
# ( o.o )
# > ^ <
# ```
```
### Function calling
Mistral-Small-24B-Instruct-2501 is excellent at function / tool calling tasks via vLLM. *E.g.:*
<details>
<summary>Example</summary>
```py
import requests
import json
from huggingface_hub import hf_hub_download
from datetime import datetime, timedelta
url = "http://<your-url>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}
model = "mistralai/Mistral-Small-24B-Instruct-2501"
def load_system_prompt(repo_id: str, filename: str) -> str:
file_path = hf_hub_download(repo_id=repo_id, filename=filename)
with open(file_path, "r") as file:
system_prompt = file.read()
today = datetime.today().strftime("%Y-%m-%d")
yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d")
model_name = repo_id.split("/")[-1]
return system_prompt.format(name=model_name, today=today, yesterday=yesterday)
SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt")
tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"city": {
"type": "string",
"description": "The city to find the weather for, e.g. 'San Francisco'",
},
"state": {
"type": "string",
"description": "The state abbreviation, e.g. 'CA' for California",
},
"unit": {
"type": "string",
"description": "The unit for temperature",
"enum": ["celsius", "fahrenheit"],
},
},
"required": ["city", "state", "unit"],
},
},
},
{
"type": "function",
"function": {
"name": "rewrite",
"description": "Rewrite a given text for improved clarity",
"parameters": {
"type": "object",
"properties": {
"text": {
"type": "string",
"description": "The input text to rewrite",
}
},
},
},
},
]
messages = [
{"role": "system", "content": SYSTEM_PROMPT},
{
"role": "user",
"content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.",
},
{
"role": "assistant",
"content": "",
"tool_calls": [
{
"id": "bbc5b7ede",
"type": "function",
"function": {
"name": "rewrite",
"arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}',
},
}
],
},
{
"role": "tool",
"content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}',
"tool_call_id": "bbc5b7ede",
"name": "rewrite",
},
{
"role": "assistant",
"content": "---\n\nOpenAI is a FOR-profit company.",
},
{
"role": "user",
"content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?",
},
]
data = {"model": model, "messages": messages, "tools": tools}
response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["tool_calls"])
# [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}]
```
</details>
#### Offline
```py
from vllm import LLM
from vllm.sampling_params import SamplingParams
from datetime import datetime, timedelta
SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
user_prompt = "Give me 5 non-formal ways to say 'See you later' in French."
messages = [
{
"role": "system",
"content": SYSTEM_PROMPT
},
{
"role": "user",
"content": user_prompt
},
]
# note that running this model on GPU requires over 60 GB of GPU RAM
llm = LLM(model="mistralai/Mistral-Small-24B-Instruct-2501", tokenizer_mode="mistral", tensor_parallel_size=8)
sampling_params = SamplingParams(max_tokens=512, temperature=0.15)
outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
# /\_/\
# ( o.o )
# > ^ <
# ```
```
### Transformers
If you want to use Hugging Face transformers to generate text, you can do something like this.
```py
from transformers import pipeline
import torch
messages = [
{"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-Small-24B-Instruct-2501", max_new_tokens=256, torch_dtype=torch.bfloat16)
chatbot(messages)
```
### Ollama
[Ollama](https://github.com/ollama/ollama) can run this model locally on MacOS, Windows and Linux.
```
ollama run mistral-small
```
4-bit quantization (aliased to default):
```
ollama run mistral-small:24b-instruct-2501-q4_K_M
```
8-bit quantization:
```
ollama run mistral-small:24b-instruct-2501-q8_0
```
FP16:
```
ollama run mistral-small:24b-instruct-2501-fp16
``` |
aletorregrosab/caldeiro | aletorregrosab | "2025-01-15T17:56:03Z" | 14 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-01-15T17:18:24Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Caldeiro
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('aletorregrosab/caldeiro', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
hongngo/d9b7e1e9-09ca-44b5-8e0e-c662ed443109 | hongngo | "2025-01-28T14:16:04Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-28T12:27:10Z" | ---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d9b7e1e9-09ca-44b5-8e0e-c662ed443109
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: 01-ai/Yi-1.5-9B-Chat-16K
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7a291b1d31d37167_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7a291b1d31d37167_train_data.json
type:
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/d9b7e1e9-09ca-44b5-8e0e-c662ed443109
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/7a291b1d31d37167_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 57452f82-4de3-4f56-93da-17c43b36a921
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 57452f82-4de3-4f56-93da-17c43b36a921
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# d9b7e1e9-09ca-44b5-8e0e-c662ed443109
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3635
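A minimal inference sketch (not part of the original card): load the base model and attach this LoRA adapter with 🤗 PEFT.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then apply this adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("01-ai/Yi-1.5-9B-Chat-16K", device_map="auto")
model = PeftModel.from_pretrained(base, "hongngo/d9b7e1e9-09ca-44b5-8e0e-c662ed443109")
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-1.5-9B-Chat-16K")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```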
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1691 | 0.0056 | 200 | 0.3635 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
stablediffusionapi/deliberatev5-inpaintingsa | stablediffusionapi | "2024-03-25T14:48:12Z" | 44 | 0 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-03-25T14:46:09Z" | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Deliberate_v5-inpainting.safetensors API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace the API key in the code below, and change **model_id** to "deliberatev5-inpaintingsa".
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/deliberatev5-inpaintingsa)
Model link: [View model](https://modelslab.com/models/deliberatev5-inpaintingsa)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json
url = "https://modelslab.com/api/v6/images/text2img"
payload = json.dumps({
"key": "your_api_key",
"model_id": "deliberatev5-inpaintingsa",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
pjox/camembert-classical-fr-ner | pjox | "2023-01-18T14:15:25Z" | 0 | 0 | flair | [
"flair",
"Early Modern French",
"Historical",
"NER",
"token-classification",
"fr",
"dataset:freemner",
"license:apache-2.0",
"region:us"
] | token-classification | "2023-01-18T14:11:44Z" | ---
language: fr
tags:
- Early Modern French
- Historical
- NER
- flair
license: apache-2.0
datasets:
- freemner
library_name: flair
pipeline_tag: token-classification
---
<a href="https://portizs.eu/publication/2022/lrec/dalembert/">
<img width="300px" src="https://portizs.eu/publication/2020/acl/camembert/featured_huac8a9374dbd7d6a2cb77224540858ab4_463389_720x2500_fit_q100_h2_lanczos_3.webp">
</a>
# CamemBERT Early Modern French NER model
This model is a fine-tuned version of a [CamemBERT model](https://huggingface.co/camembert-base), trained on the [FreEMNER corpus](https://doi.org/10.5281/zenodo.6481135) for Early Modern French. It was
introduced in [this paper](https://aclanthology.org/2022.coling-1.327/).
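A minimal usage sketch with the [Flair](https://github.com/flairNLP/flair) library (assuming the checkpoint loads as a standard Flair `SequenceTagger`; the example sentence is illustrative):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the NER tagger from the Hub and tag an Early Modern French sentence.
tagger = SequenceTagger.load("pjox/camembert-classical-fr-ner")
sentence = Sentence("Molière est né à Paris.")
tagger.predict(sentence)
print(sentence.get_spans("ner"))
```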
### BibTeX entry and citation info
```bibtex
@inproceedings{ortiz-suarez-gabay-2022-data,
title = "A Data-driven Approach to Named Entity Recognition for Early {M}odern {F}rench",
author = "Ortiz Suarez, Pedro and
Gabay, Simon",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.327",
pages = "3722--3730",
abstract = "Named entity recognition has become an increasingly useful tool for digital humanities research, specially when it comes to historical texts. However, historical texts pose a wide range of challenges to both named entity recognition and natural language processing in general that are still difficult to address even with modern neural methods. In this article we focus in named entity recognition for historical French, and in particular for Early Modern French (16th-18th c.), i.e. Ancien R{\'e}gime French. However, instead of developing a specialised architecture to tackle the particularities of this state of language, we opt for a data-driven approach by developing a new corpus with fine-grained entity annotation, covering three centuries of literature corresponding to the early modern period; we try to annotate as much data as possible producing a corpus that is many times bigger than the most popular NER evaluation corpora for both Contemporary English and French. We then fine-tune existing state-of-the-art architectures for Early Modern and Contemporary French, obtaining results that are on par with those of the current state-of-the-art NER systems for Contemporary English. Both the corpus and the fine-tuned models are released.",
}
``` |
microsoft/unispeech-1350-en-353-fr-ft-1h | microsoft | "2021-12-19T23:14:27Z" | 47 | 0 | transformers | [
"transformers",
"pytorch",
"unispeech",
"automatic-speech-recognition",
"audio",
"fr",
"dataset:common_voice",
"arxiv:2101.07597",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
language:
- fr
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
---
# UniSpeech-Large-plus FRENCH
[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)
The large model pretrained on 16kHz sampled speech audio and phonetic labels, and subsequently fine-tuned on 1h of French phonemes.
When using the model, make sure that your speech input is also sampled at 16kHz and that your text is converted into a sequence of phonemes.
[Paper: UniSpeech: Unified Speech Representation Learning
with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597)
Authors: Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang
**Abstract**
*In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture information more correlated with phonetic structures and improve the generalization across languages and domains. We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task, i.e., a relative word error rate reduction of 6% against the previous approach.*
The original model can be found under https://github.com/microsoft/UniSpeech/tree/main/UniSpeech.
# Usage
This is a speech model that has been fine-tuned on phoneme classification.
## Inference
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "microsoft/unispeech-1350-en-353-fr-ft-1h"
sample = next(iter(load_dataset("common_voice", "fr", split="test", streaming=True)))
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
prediction_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(prediction_ids)
# gives -> 'œ̃ v ʁ ɛ t ʁ a v a j ɛ̃ t e ʁ ɛ s ɑ̃ v a ɑ̃ f ɛ̃ ɛ t ʁ ə m ə n e s y ʁ s ə s y ʒ ɛ'
# for 'Un vrai travail intéressant va, enfin, être mené sur ce sujet.'
```
# Contribution
The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).
# License
The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE)
# Official Results
See *UniSpeech-L^{+}* - *fr*:
 |
JoBuettner/rl_course_vizdoom_health_gathering_supreme | JoBuettner | "2023-05-23T21:44:18Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-23T21:20:00Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 12.73 +/- 5.75
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r JoBuettner/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment (the module path below assumes the standard `sf_examples` layout of the Sample-Factory repo):
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment (same `sf_examples` assumption as above):
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
AIBreaker25/ai | AIBreaker25 | "2025-02-13T08:27:07Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-02-13T08:27:01Z" | ---
license: apache-2.0
---
|
sambanovasystems/SambaLingo-Thai-Chat | sambanovasystems | "2024-04-16T22:38:55Z" | 197 | 37 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"th",
"en",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"dataset:HuggingFaceH4/cai-conversation-harmless",
"arxiv:2404.05829",
"license:llama2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-15T22:46:41Z" | ---
language:
- th
- en
license: llama2
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/cai-conversation-harmless
model-index:
- name: SambaLingo-Thai-Chat
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 52.73
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sambanovasystems/SambaLingo-Thai-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 78.42
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sambanovasystems/SambaLingo-Thai-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 43.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sambanovasystems/SambaLingo-Thai-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 40.84
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sambanovasystems/SambaLingo-Thai-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 72.22
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sambanovasystems/SambaLingo-Thai-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 8.57
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sambanovasystems/SambaLingo-Thai-Chat
name: Open LLM Leaderboard
---
# SambaLingo-Thai-Chat
<img src="SambaLingo_Logo.png" width="340" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
<!-- Provide a quick summary of what the model is/does. -->
SambaLingo-Thai-Chat is a human-aligned chat model trained in Thai and English. It is trained using direct preference optimization on top of the base model [SambaLingo-Thai-Base](https://huggingface.co/sambanovasystems/SambaLingo-Thai-Base). The base model adapts [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) to Thai by training on 38 billion tokens from the Thai split of the [Cultura-X](https://huggingface.co/datasets/uonlp/CulturaX) dataset. Try this model at [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space).
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Thai, English
- **Finetuned from model:** [Llama-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf)
- **Try This Model:** [SambaLingo-chat-space](https://huggingface.co/spaces/sambanovasystems/SambaLingo-chat-space)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post**: [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
## Getting Started
### Loading Model With Hugging Face
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Thai-Chat", device_map="auto", torch_dtype="auto")
```
### Interacting With Model Pipeline
Please make sure to set use_fast=False when loading the tokenizer.
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Thai-Chat", device_map="auto", use_fast=False)
messages = [
{"role": "user", "content": {YOUR_QUESTION}},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]
outputs = outputs["generated_text"]
```
### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
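Applied to the Hugging Face loading example above, these settings look like the following sketch (the question string is illustrative):
```python
messages = [{"role": "user", "content": "สวัสดีครับ"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
    repetition_penalty=1.0,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```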
### Prompting Guidelines
To prompt this model, please use the following chat template:
```
<|user|>\n{question}</s>\n<|assistant|>\n
```
### Example Prompts and Generations
```
<|user|>
ปรัชญาทางเศรษฐกิจที่พระบาทสมเด็จพระมหาภูมิพลอดุลยเดชมหาราช บรมนาถบพิตรมีพระราชดำรัสแก่ชาวไทย มีชื่อว่าอะไร</s>
<|assistant|>
ปรัชญาทางเศรษฐกิจที่พระบาทสมเด็จพระปรมินทรมหาภูมิพลอดุลยเดชทรงมีพระราชดำรัสแก่ชาวไทยเรียกว่า "ปรัชญาเศรษฐกิจพอเพียง" ปรัชญานี้เน้นความสำคัญของการพัฒนาแบบองค์รวมแ ละความสมดุล โดยคำนึงถึงสภาพแวดล้อมทางสังคม เศรษฐกิจ และสิ่งแวดล้อม ปรัชญาเศรษฐกิจพอเพียงเน้นความสำคัญของการพึ่งพาตนเอง ความมีเหตุผล และการประหยัด ตลอดจนความสำคัญของก ารพัฒนาจิตใจและจิตวิญญาณ ปรัชญานี้เน้นย้ำถึงความสำคัญของการดำเนินชีวิตอย่างพอประมาณ มีเหตุผล และมีคุณธรรม ตลอดจนความสำคัญของการพัฒนาความรู้และทักษะอย่างต่อเนื่อง ปรัชญาเศรษฐกิ จพอเพียงได้รับการยกย่องอย่างกว้างขวางว่าเป็นแนวทางแก้ไขปัญหาการพัฒนาที่ยั่งยืนและเป็นแนวทางในการดำเนินชีวิตที่นำไปสู่ความสุขและความเป็นอยู่ที่ดี
```
## Training Details
The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), and comprises two stages: supervised fine-tuning (SFT) and Direct Preference Optimization (DPO).
The SFT phase was done on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.
The DPO phase was done on the [ultrafeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset and [cai-conversation-harmless](https://huggingface.co/datasets/HuggingFaceH4/cai-conversation-harmless) dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.
## Tokenizer Details
We extended the vocabulary of the base llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
Use of this model is governed by the Meta’s [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: Model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The Model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset
- EleutherAI for their open-source evaluation framework
- The Hugging Face H4 team for open sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
```
@misc{csaki2024sambalingo,
title={SambaLingo: Teaching Large Language Models New Languages},
author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
year={2024},
eprint={2404.05829},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sambanovasystems__SambaLingo-Thai-Chat)
| Metric |Value|
|---------------------------------|----:|
|Avg. |49.45|
|AI2 Reasoning Challenge (25-Shot)|52.73|
|HellaSwag (10-Shot) |78.42|
|MMLU (5-Shot) |43.95|
|TruthfulQA (0-shot) |40.84|
|Winogrande (5-shot) |72.22|
|GSM8k (5-shot) | 8.57|
|
HamZurger/Taxi-V3 | HamZurger | "2023-07-30T18:45:37Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-30T18:45:36Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
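# Note: load_from_hub is the helper defined in the Hugging Face Deep RL
# course notebook (it downloads and unpickles the saved Q-table dict);
# it is not part of a published package.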
model = load_from_hub(repo_id="HamZurger/Taxi-V3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mlc-ai/DeepSeek-R1-Distill-Llama-70B-q4f16_1-MLC | mlc-ai | "2025-01-21T08:03:05Z" | 15 | 0 | mlc-llm | [
"mlc-llm",
"web-llm",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Llama-70B",
"region:us"
] | null | "2025-01-21T02:19:24Z" | ---
library_name: mlc-llm
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
tags:
- mlc-llm
- web-llm
---
# DeepSeek-R1-Distill-Llama-70B-q4f16_1-MLC
This is the [DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) model in MLC format `q4f16_1`.
The model can be used for projects [MLC-LLM](https://github.com/mlc-ai/mlc-llm) and [WebLLM](https://github.com/mlc-ai/web-llm).
## Example Usage
Here are some examples of using this model in MLC LLM.
Before running the examples, please install MLC LLM by following the [installation documentation](https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages).
### Chat
In command line, run
```bash
mlc_llm chat HF://mlc-ai/DeepSeek-R1-Distill-Llama-70B-q4f16_1-MLC
```
### REST Server
In command line, run
```bash
mlc_llm serve HF://mlc-ai/DeepSeek-R1-Distill-Llama-70B-q4f16_1-MLC
```
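Once the server is up, you can query it with any OpenAI-compatible client; for example with `curl` (a sketch assuming the default host and port of `127.0.0.1:8000`):
```bash
curl -X POST http://127.0.0.1:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "HF://mlc-ai/DeepSeek-R1-Distill-Llama-70B-q4f16_1-MLC",
        "messages": [{"role": "user", "content": "What is the meaning of life?"}]
      }'
```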
### Python API
```python
from mlc_llm import MLCEngine
# Create engine
model = "HF://mlc-ai/DeepSeek-R1-Distill-Llama-70B-q4f16_1-MLC"
engine = MLCEngine(model)
# Run chat completion in OpenAI API.
for response in engine.chat.completions.create(
messages=[{"role": "user", "content": "What is the meaning of life?"}],
model=model,
stream=True,
):
for choice in response.choices:
print(choice.delta.content, end="", flush=True)
print("\n")
engine.terminate()
```
## Documentation
For more information on MLC LLM project, please visit our [documentation](https://llm.mlc.ai/docs/) and [GitHub repo](http://github.com/mlc-ai/mlc-llm).
|
YakovElm/Hyperledger5SetFitModel_balance_ratio_2 | YakovElm | "2023-06-01T09:28:12Z" | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-06-01T09:27:31Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Hyperledger5SetFitModel_balance_ratio_2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Hyperledger5SetFitModel_balance_ratio_2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
PhillipGuo/hp-lat-llama-genericized_diff_hp_indices-epsilon10.0-pgd_layer15-def_layer0-harmless-102 | PhillipGuo | "2024-05-21T02:31:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-21T02:31:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
0x1202/65ff08e2-c60e-4c92-999a-e6285a3d2ad1 | 0x1202 | "2025-01-14T03:19:03Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:adapter:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | null | "2025-01-14T02:23:17Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 65ff08e2-c60e-4c92-999a-e6285a3d2ad1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2f22bb3438fdbdeb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2f22bb3438fdbdeb_train_data.json
type:
field_instruction: english
field_output: korean
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: 0x1202/65ff08e2-c60e-4c92-999a-e6285a3d2ad1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 400
micro_batch_size: 2
mlflow_experiment_name: /tmp/2f22bb3438fdbdeb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: be31c0da-9110-4803-817b-2f1d224c3c6a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: be31c0da-9110-4803-817b-2f1d224c3c6a
warmup_steps: 30
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 65ff08e2-c60e-4c92-999a-e6285a3d2ad1
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 30
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 1.6306 |
| 1.4292 | 0.0027 | 100 | 1.0239 |
| 1.6539 | 0.0054 | 200 | 0.9393 |
| 1.4139 | 0.0081 | 300 | 0.9036 |
| 1.2223 | 0.0108 | 400 | 0.8997 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF | bartowski | "2024-03-13T21:57:05Z" | 269 | 3 | null | [
"gguf",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-09T17:08:49Z" | ---
license: other
license_name: deepseek
license_link: https://huggingface.co/deepseek-ai/deepseek-coder-33b-base/blob/main/LICENSE
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp Quantizations of WhiteRabbitNeo-7B-v1.5a
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2354">b2354</a> for quantization.
Original model: https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-7B-v1.5a
Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [WhiteRabbitNeo-7B-v1.5a-Q8_0.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF/blob/main/WhiteRabbitNeo-7B-v1.5a-Q8_0.gguf) | Q8_0 | 7.16GB | Extremely high quality, generally unneeded but max available quant. |
| [WhiteRabbitNeo-7B-v1.5a-Q6_K.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF/blob/main/WhiteRabbitNeo-7B-v1.5a-Q6_K.gguf) | Q6_K | 5.53GB | Very high quality, near perfect, *recommended*. |
| [WhiteRabbitNeo-7B-v1.5a-Q5_K_M.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF/blob/main/WhiteRabbitNeo-7B-v1.5a-Q5_K_M.gguf) | Q5_K_M | 4.78GB | High quality, very usable. |
| [WhiteRabbitNeo-7B-v1.5a-Q5_K_S.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF/blob/main/WhiteRabbitNeo-7B-v1.5a-Q5_K_S.gguf) | Q5_K_S | 4.65GB | High quality, very usable. |
| [WhiteRabbitNeo-7B-v1.5a-Q5_0.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF/blob/main/WhiteRabbitNeo-7B-v1.5a-Q5_0.gguf) | Q5_0 | 4.65GB | High quality, older format, generally not recommended. |
| [WhiteRabbitNeo-7B-v1.5a-Q4_K_M.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF/blob/main/WhiteRabbitNeo-7B-v1.5a-Q4_K_M.gguf) | Q4_K_M | 4.08GB | Good quality, similar to 4.25 bpw. |
| [WhiteRabbitNeo-7B-v1.5a-Q4_K_S.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF/blob/main/WhiteRabbitNeo-7B-v1.5a-Q4_K_S.gguf) | Q4_K_S | 3.85GB | Slightly lower quality with small space savings. |
| [WhiteRabbitNeo-7B-v1.5a-Q4_0.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF/blob/main/WhiteRabbitNeo-7B-v1.5a-Q4_0.gguf) | Q4_0 | 3.82GB | Decent quality, older format, generally not recommended. |
| [WhiteRabbitNeo-7B-v1.5a-Q3_K_L.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF/blob/main/WhiteRabbitNeo-7B-v1.5a-Q3_K_L.gguf) | Q3_K_L | 3.59GB | Lower quality but usable, good for low RAM availability. |
| [WhiteRabbitNeo-7B-v1.5a-Q3_K_M.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF/blob/main/WhiteRabbitNeo-7B-v1.5a-Q3_K_M.gguf) | Q3_K_M | 3.29GB | Even lower quality. |
| [WhiteRabbitNeo-7B-v1.5a-Q3_K_S.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF/blob/main/WhiteRabbitNeo-7B-v1.5a-Q3_K_S.gguf) | Q3_K_S | 2.94GB | Low quality, not recommended. |
| [WhiteRabbitNeo-7B-v1.5a-Q2_K.gguf](https://huggingface.co/bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF/blob/main/WhiteRabbitNeo-7B-v1.5a-Q2_K.gguf) | Q2_K | 2.53GB | Extremely low quality, *not* recommended. |
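To grab just one of these files, the `huggingface-cli` that ships with the `huggingface_hub` package works well (a sketch; swap in the quant you want):
```bash
huggingface-cli download bartowski/WhiteRabbitNeo-7B-v1.5a-GGUF \
  --include "WhiteRabbitNeo-7B-v1.5a-Q4_K_M.gguf" --local-dir ./
```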
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF | mradermacher | "2025-02-03T06:45:30Z" | 181 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Deev124/hermes-llama3-roleplay-2000-v3",
"base_model:quantized:Deev124/hermes-llama3-roleplay-2000-v3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-11-21T16:09:18Z" | ---
base_model: Deev124/hermes-llama3-roleplay-2000-v3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Deev124/hermes-llama3-roleplay-2000-v3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
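For a quick start once you have a file, a minimal sketch with llama.cpp's `llama-cli` (the filename is one of the quants below; `-n` caps the number of generated tokens):
```
./llama-cli -m hermes-llama3-roleplay-2000-v3.i1-Q4_K_M.gguf -p "Hello" -n 128
```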
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/hermes-llama3-roleplay-2000-v3-i1-GGUF/resolve/main/hermes-llama3-roleplay-2000-v3.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
JordanTensor/gemma-sandbagging-ppvvz1jq-step1024 | JordanTensor | "2025-02-20T15:30:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-02-20T15:30:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
claterza/dqn-SpaceInvadersNoFrameskip-v4 | claterza | "2022-12-29T03:03:21Z" | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-29T03:02:44Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 606.50 +/- 131.09
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga claterza -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga claterza -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga claterza
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
ittailup/lallama-13b-lora | ittailup | "2023-08-13T22:55:06Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-13T22:53:53Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
zyoscovits/phrasebank-sentiment-analysis | zyoscovits | "2023-10-26T19:24:03Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-26T19:23:40Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
datasets:
- financial_phrasebank
metrics:
- f1
- accuracy
model-index:
- name: phrasebank-sentiment-analysis
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: financial_phrasebank
type: financial_phrasebank
config: sentences_50agree
split: train
args: sentences_50agree
metrics:
- name: F1
type: f1
value: 0.8486113627644013
- name: Accuracy
type: accuracy
value: 0.8624484181568088
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phrasebank-sentiment-analysis
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the financial_phrasebank dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5636
- F1: 0.8486
- Accuracy: 0.8624
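For inference, the fine-tuned model can be loaded with the standard `transformers` pipeline — a minimal sketch (the example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="zyoscovits/phrasebank-sentiment-analysis",
)
print(classifier("Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in 2007."))
```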
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|
| 0.5724 | 0.94 | 100 | 0.3946 | 0.8310 | 0.8508 |
| 0.2665 | 1.89 | 200 | 0.3415 | 0.8520 | 0.8618 |
| 0.1302 | 2.83 | 300 | 0.4812 | 0.8543 | 0.8686 |
| 0.0548 | 3.77 | 400 | 0.5636 | 0.8486 | 0.8624 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
pszemraj/opt-125m-email-generation | pszemraj | "2023-11-19T22:19:09Z" | 218 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"opt",
"text-generation",
"generated_from_trainer",
"custom-license",
"non-commercial",
"email",
"auto-complete",
"125m",
"dataset:aeslc",
"base_model:facebook/opt-125m",
"base_model:finetune:facebook/opt-125m",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-07-10T01:24:47Z" | ---
license: other
tags:
- generated_from_trainer
- opt
- custom-license
- non-commercial
- email
- auto-complete
- 125m
datasets:
- aeslc
widget:
- text: 'Hey <NAME>,
Thank you for signing up for my weekly newsletter. Before we get started, you''ll
have to confirm your email address.'
example_title: newsletter
- text: 'Hi <NAME>,
I hope this email finds you well. Let me start by saying that I am a big fan of
your work.'
example_title: fan
- text: 'Greetings <NAME>,
I hope you had a splendid evening at the Company sausage eating festival. I am
reaching out because'
example_title: festival
- text: 'Good Morning <NAME>,
I was just thinking to myself about how much I love creating value'
example_title: value
- text: URGENT - I need
example_title: URGENT
parameters:
min_length: 4
max_length: 64
length_penalty: 0.7
no_repeat_ngram_size: 3
do_sample: false
num_beams: 4
early_stopping: true
repetition_penalty: 3.5
use_fast: false
base_model: facebook/opt-125m
---
> NOTE: there is currently a bug in the Hugging Face API for OPT models. Please use the [colab notebook](https://colab.research.google.com/gist/pszemraj/033dc9a38da31ced7a0343091ba42e31/email-autocomplete-demo-125m.ipynb) to test :)
# opt for email generation - 125m
Why write the rest of your email when you can generate it?
```
from transformers import pipeline
model_tag = "pszemraj/opt-125m-email-generation"
generator = pipeline(
'text-generation',
model=model_tag,
use_fast=False,
do_sample=False,
)
prompt = """
Hello,
Following up on the bubblegum shipment."""
generator(
prompt,
max_length=96,
) # generate
```
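For longer or more controlled completions than the 64-token API limit, the widget's generation parameters (listed in the metadata above) can be passed through the same pipeline — a sketch:
```
generator(
    prompt,
    max_length=128,          # go beyond the 64-token API limit
    num_beams=4,
    no_repeat_ngram_size=3,
    repetition_penalty=3.5,
    length_penalty=0.7,
    early_stopping=True,
)
```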
- [colab notebook](https://colab.research.google.com/gist/pszemraj/033dc9a38da31ced7a0343091ba42e31/email-autocomplete-demo-125m.ipynb) for testing/use
## About
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the `aeslc` dataset.
- Email addresses, phone numbers, and similar identifiers were filtered out during dataset preparation using [clean-text](https://pypi.org/project/clean-text/) in Python.
- Note that the hosted API is restricted to generating 64 tokens - you can generate longer emails by using the model in a text-generation `pipeline` object
It achieves the following results on the evaluation set:
- Loss: 2.5552
## Intended uses & limitations
- OPT models cannot be used commercially
- [here is a GitHub gist](https://gist.github.com/pszemraj/c1b0a76445418b6bbddd5f9633d1bb7f) for a script to generate emails in the console or to a text file.
## Training and evaluation data
- the `email_body` field of train + validation (get more data) from the [aeslc](https://huggingface.co/datasets/aeslc) dataset.
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8245 | 1.0 | 129 | 2.8030 |
| 2.521 | 2.0 | 258 | 2.6343 |
| 2.2074 | 3.0 | 387 | 2.5595 |
| 2.0145 | 4.0 | 516 | 2.5552 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
Cartinoe5930/Einstein-v4-7B_weight_diff | Cartinoe5930 | "2024-06-23T04:05:51Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-23T04:03:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bassemessam/Arabic-bank77-intent-classification | bassemessam | "2024-05-06T16:50:01Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-30T14:27:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fedovtt/f05cd706-9db0-47af-8fdb-c64b24fbfa6c | fedovtt | "2025-01-18T23:42:28Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:samoline/f6a49c16-f6d5-4405-a50a-9f416357ffe3",
"base_model:adapter:samoline/f6a49c16-f6d5-4405-a50a-9f416357ffe3",
"region:us"
] | null | "2025-01-18T23:35:43Z" | ---
library_name: peft
base_model: samoline/f6a49c16-f6d5-4405-a50a-9f416357ffe3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f05cd706-9db0-47af-8fdb-c64b24fbfa6c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: samoline/f6a49c16-f6d5-4405-a50a-9f416357ffe3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
ds_type: json
format: custom
path: /workspace/input_data/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: fedovtt/f05cd706-9db0-47af-8fdb-c64b24fbfa6c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/train_c4393383-ef1d-4e9c-b95c-18b4f735570d.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 22bf9415-03e7-41f5-b432-a11c378f3057
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 22bf9415-03e7-41f5-b432-a11c378f3057
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# f05cd706-9db0-47af-8fdb-c64b24fbfa6c
This model is a fine-tuned version of [samoline/f6a49c16-f6d5-4405-a50a-9f416357ffe3](https://huggingface.co/samoline/f6a49c16-f6d5-4405-a50a-9f416357ffe3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
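A minimal sketch for loading this LoRA adapter with PEFT (the base model ID comes from the axolotl config above; the prompt and generation settings are illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "samoline/f6a49c16-f6d5-4405-a50a-9f416357ffe3"
base = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base, "fedovtt/f05cd706-9db0-47af-8fdb-c64b24fbfa6c")
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```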
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | nan |
| 0.0 | 0.0010 | 5 | nan |
| 0.0 | 0.0020 | 10 | nan |
| 0.0 | 0.0031 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
PrunaAI/hrnet_w18.ms_aug_in1k-turbo-green-smashed | PrunaAI | "2024-08-02T15:30:28Z" | 1 | 0 | pruna-engine | [
"pruna-engine",
"region:us"
] | null | "2024-03-10T03:51:38Z" | ---
library_name: pruna-engine
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/rskEr4BZJx) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed by combining quantization, xformers, jit, cuda graphs, triton.
- ***How does the model quality change?*** The quality of the model output might slightly vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We used a custom Pruna model format based on pickle to make models compatible with the compression methods. We provide a tutorial for running models in Docker containers in the documentation [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) if needed.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that you have linux, python 3.10, and cuda 12.1.0 requirements installed. For cuda, check with `nvcc --version` and install with `conda install nvidia/label/cuda-12.1.0::cuda`.
1. Install the `pruna-engine` available [here](https://pypi.org/project/pruna-engine/) on Pypi. It might take up to 15 minutes to install.
```bash
pip install pruna-engine[gpu]==0.7.1 --extra-index-url https://pypi.nvidia.com --extra-index-url https://pypi.ngc.nvidia.com --extra-index-url https://prunaai.pythonanywhere.com/
```
2. Download the model files using one of these three options.
- Option 1 - Use command line interface (CLI):
```bash
mkdir hrnet_w18.ms_aug_in1k-turbo-green-smashed
huggingface-cli download PrunaAI/hrnet_w18.ms_aug_in1k-turbo-green-smashed --local-dir hrnet_w18.ms_aug_in1k-turbo-green-smashed --local-dir-use-symlinks False
```
- Option 2 - Use Python:
```python
import subprocess
repo_name = "hrnet_w18.ms_aug_in1k-turbo-green-smashed"
subprocess.run(["mkdir", repo_name])
subprocess.run(["huggingface-cli", "download", 'PrunaAI/'+ repo_name, "--local-dir", repo_name, "--local-dir-use-symlinks", "False"])
```
- Option 3 - Download them manually on the HuggingFace model page.
3. Load & run the model.
```python
from pruna_engine.PrunaModel import PrunaModel
model_path = "hrnet_w18.ms_aug_in1k-turbo-green-smashed/model" # Specify the downloaded model path.
smashed_model = PrunaModel.load_model(model_path) # Load the model.
import torch; image = torch.rand(1, 3, 224, 224).to('cuda')
smashed_model(image)
```
## Configurations
The configuration info is in `model/smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model, hrnet_w18.ms_aug_in1k, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
booydar/RMT-Llama-3.2-1B-Instruct-2x1024-mem16-lora-babilong-qa1-5_ct-v2 | booydar | "2024-12-23T12:39:44Z" | 5 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2024-12-23T12:35:57Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
davidschulte/ESM_tab_fact_tab_fact_new | davidschulte | "2025-03-25T12:12:40Z" | 14 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:wenhu/tab_fact",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2025-01-08T17:01:17Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- wenhu/tab_fact
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM wenhu/tab_fact
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** wenhu/tab_fact
- **ESM architecture:** linear
- **ESM embedding dimension:** 768
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
- **ESM version:** 0.1.0
## Training Details
### Intermediate Task
- **Task ID:** wenhu/tab_fact
- **Subset [optional]:** tab_fact
- **Text Column:** table_text
- **Label Column:** label
- **Dataset Split:** train
- **Sample size [optional]:** 10000
- **Sample seed [optional]:** 42
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional trainiung details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps used for?
Embedding Space Maps are a part of ESM-LogME, an efficient method for finding intermediate datasets for transfer learning. There are two reasons to use ESM-LogME:
### You don't have enough training data for your problem
If you don't have enough training data for your problem, just use ESM-LogME to find more.
You can supplement model training by including publicly available datasets in the training process.
1. Fine-tune a language model on suitable intermediate dataset.
2. Fine-tune the resulting model on your target dataset.
This workflow is called intermediate task transfer learning and it can significantly improve the target performance.
But what is a suitable dataset for your problem? ESM-LogME enables you to quickly rank thousands of datasets on the Hugging Face Hub by how well they are expected to transfer to your target task.
### You want to find similar datasets to your target dataset
ESM-LogME can be used like a search engine on the Hugging Face Hub: you can find tasks similar to your target task without relying on heuristics. ESM-LogME estimates how language models fine-tuned on each intermediate task would benefit your target task. This quantitative approach combines the effects of domain similarity and task similarity.
## How can I use ESM-LogME / ESMs?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found via their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
```python
1. davanstrien/test_imdb_embedd2 Score: -0.618529
2. davanstrien/test_imdb_embedd Score: -0.618644
3. davanstrien/test1 Score: -0.619334
4. stanfordnlp/imdb Score: -0.619454
5. stanfordnlp/sst Score: -0.62995
```
| Rank | Task ID | Task Subset | Text Column | Label Column | Task Split | Num Examples | ESM Architecture | Score |
|-------:|:------------------------------|:----------------|:--------------|:---------------|:-------------|---------------:|:-------------------|----------:|
| 1 | davanstrien/test_imdb_embedd2 | default | text | label | train | 10000 | linear | -0.618529 |
| 2 | davanstrien/test_imdb_embedd | default | text | label | train | 10000 | linear | -0.618644 |
| 3 | davanstrien/test1 | default | text | label | train | 10000 | linear | -0.619334 |
| 4 | stanfordnlp/imdb | plain_text | text | label | train | 10000 | linear | -0.619454 |
| 5 | stanfordnlp/sst | dictionary | phrase | label | dictionary | 10000 | linear | -0.62995 |
| 6 | stanfordnlp/sst | default | sentence | label | train | 8544 | linear | -0.63312 |
| 7 | kuroneko5943/snap21 | CDs_and_Vinyl_5 | sentence | label | train | 6974 | linear | -0.634365 |
| 8 | kuroneko5943/snap21 | Video_Games_5 | sentence | label | train | 6997 | linear | -0.638787 |
| 9 | kuroneko5943/snap21 | Movies_and_TV_5 | sentence | label | train | 6989 | linear | -0.639068 |
| 10 | fancyzhx/amazon_polarity | amazon_polarity | content | label | train | 10000 | linear | -0.639718 |
For more information on how to use ESMs please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector). We provide further documentation and tutorials for finding intermediate datasets and training your own ESMs.
## How do Embedding Space Maps work?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
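Conceptually, the linear ESM used here (embedding dimension 768, see above) is a single learned affine map over base-model embeddings — a sketch in PyTorch, with names chosen for illustration:
```python
import torch.nn as nn

class LinearESM(nn.Module):
    """Approximates fine-tuned embeddings from base-model embeddings."""

    def __init__(self, dim: int = 768):
        super().__init__()
        self.projection = nn.Linear(dim, dim)  # learned affine map

    def forward(self, base_embeddings):
        return self.projection(base_embeddings)
```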
## How can I use Embedding Space Maps for Intermediate Task Selection?
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using this Embedding Space Maps, please cite our [paper](https://aclanthology.org/2024.emnlp-main.529/).
**BibTeX:**
```
@inproceedings{schulte-etal-2024-less,
title = "Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning",
author = "Schulte, David and
Hamborg, Felix and
Akbik, Alan",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.529/",
doi = "10.18653/v1/2024.emnlp-main.529",
pages = "9431--9442",
abstract = "Intermediate task transfer learning can greatly improve model performance. If, for example, one has little training data for emotion detection, first fine-tuning a language model on a sentiment classification dataset may improve performance strongly. But which task to choose for transfer learning? Prior methods producing useful task rankings are infeasible for large source pools, as they require forward passes through all source language models. We overcome this by introducing Embedding Space Maps (ESMs), light-weight neural networks that approximate the effect of fine-tuning a language model. We conduct the largest study on NLP task transferability and task selection with 12k source-target pairs. We find that applying ESMs on a prior method reduces execution time and disk space usage by factors of 10 and 278, respectively, while retaining high selection performance (avg. regret@5 score of 2.95)."
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024, November). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (pp. 9431-9442).
```
## Additional Information
|
vocabtrimmer/mt5-small-itquad-qg-trimmed-it-30000 | vocabtrimmer | "2023-03-15T11:12:32Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-03-15T10:59:37Z" | # Vocabulary Trimmed [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg): `vocabtrimmer/mt5-small-itquad-qg-trimmed-it-30000`
This model is a trimmed version of [lmqg/mt5-small-itquad-qg](https://huggingface.co/lmqg/mt5-small-itquad-qg) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming vocabulary of language models to compress the model size.
The following table shows a summary of the trimming process.
| | lmqg/mt5-small-itquad-qg | vocabtrimmer/mt5-small-itquad-qg-trimmed-it-30000 |
|:---------------------------|:---------------------------|:----------------------------------------------------|
| parameter_size_full | 300,165,504 | 74,784,128 |
| parameter_size_embedding | 256,103,424 | 30,722,048 |
| vocab_size | 250,101 | 30,002 |
| compression_rate_full | 100.0 | 24.91 |
| compression_rate_embedding | 100.0 | 12.0 |
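The trimmed model remains a drop-in replacement for the original — a minimal loading sketch with `transformers`:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "vocabtrimmer/mt5-small-itquad-qg-trimmed-it-30000"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)
```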
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| it | vocabtrimmer/mc4_validation | text | it | validation | 30000 | 2 | |
XelotX/Tess-v2.5.2-Qwen2-72B-XelotX-Quants-BF16 | XelotX | "2024-07-12T16:28:28Z" | 4 | 0 | null | [
"gguf",
"text-generation",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-07-12T16:28:28Z" | ---
license: other
license_name: qwen2
license_link: https://huggingface.co/Qwen/Qwen2-72B/blob/main/LICENSE
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp imatrix Quantizations of Tess-v2.5.2-Qwen2-72B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3145">b3145</a> for quantization.
Original model: https://huggingface.co/migtissera/Tess-v2.5.2-Qwen2-72B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Tess-v2.5.2-Qwen2-72B-Q8_0.gguf](https://huggingface.co/bartowski/Tess-v2.5.2-Qwen2-72B-GGUF/tree/main/Tess-v2.5.2-Qwen2-72B-Q8_0.gguf) | Q8_0 | 79.59GB | Extremely high quality, generally unneeded but max available quant. |
| [Tess-v2.5.2-Qwen2-72B-Q5_K_M.gguf](https://huggingface.co/bartowski/Tess-v2.5.2-Qwen2-72B-GGUF/tree/main/Tess-v2.5.2-Qwen2-72B-Q5_K_M.gguf) | Q5_K_M | 57.55GB | High quality, *recommended*. |
| [Tess-v2.5.2-Qwen2-72B-Q4_K_M.gguf](https://huggingface.co/bartowski/Tess-v2.5.2-Qwen2-72B-GGUF/tree/main/Tess-v2.5.2-Qwen2-72B-Q4_K_M.gguf) | Q4_K_M | 50.67GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
| [Tess-v2.5.2-Qwen2-72B-IQ4_XS.gguf](https://huggingface.co/bartowski/Tess-v2.5.2-Qwen2-72B-GGUF/blob/main/Tess-v2.5.2-Qwen2-72B-IQ4_XS.gguf) | IQ4_XS | 43.00GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [Tess-v2.5.2-Qwen2-72B-Q3_K_M.gguf](https://huggingface.co/bartowski/Tess-v2.5.2-Qwen2-72B-GGUF/blob/main/Tess-v2.5.2-Qwen2-72B-Q3_K_M.gguf) | Q3_K_M | 41.12GB | Even lower quality. |
| [Tess-v2.5.2-Qwen2-72B-IQ3_M.gguf](https://huggingface.co/bartowski/Tess-v2.5.2-Qwen2-72B-GGUF/blob/main/Tess-v2.5.2-Qwen2-72B-IQ3_M.gguf) | IQ3_M | 38.92GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [Tess-v2.5.2-Qwen2-72B-Q3_K_S.gguf](https://huggingface.co/bartowski/Tess-v2.5.2-Qwen2-72B-GGUF/blob/main/Tess-v2.5.2-Qwen2-72B-Q3_K_S.gguf) | Q3_K_S | 37.91GB | Low quality, not recommended. |
| [Tess-v2.5.2-Qwen2-72B-IQ3_XXS.gguf](https://huggingface.co/bartowski/Tess-v2.5.2-Qwen2-72B-GGUF/blob/main/Tess-v2.5.2-Qwen2-72B-IQ3_XXS.gguf) | IQ3_XXS | 35.43GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [Tess-v2.5.2-Qwen2-72B-Q2_K.gguf](https://huggingface.co/bartowski/Tess-v2.5.2-Qwen2-72B-GGUF/blob/main/Tess-v2.5.2-Qwen2-72B-Q2_K.gguf) | Q2_K | 33.36GB | Very low quality but surprisingly usable. |
| [Tess-v2.5.2-Qwen2-72B-IQ2_M.gguf](https://huggingface.co/bartowski/Tess-v2.5.2-Qwen2-72B-GGUF/blob/main/Tess-v2.5.2-Qwen2-72B-IQ2_M.gguf) | IQ2_M | 32.93GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
| [Tess-v2.5.2-Qwen2-72B-IQ2_XS.gguf](https://huggingface.co/bartowski/Tess-v2.5.2-Qwen2-72B-GGUF/blob/main/Tess-v2.5.2-Qwen2-72B-IQ2_XS.gguf) | IQ2_XS | 30.77GB | Lower quality, uses SOTA techniques to be usable. |
| [Tess-v2.5.2-Qwen2-72B-IQ2_XXS.gguf](https://huggingface.co/bartowski/Tess-v2.5.2-Qwen2-72B-GGUF/blob/main/Tess-v2.5.2-Qwen2-72B-IQ2_XXS.gguf) | IQ2_XXS | 29.20GB | Lower quality, uses SOTA techniques to be usable. |
| [Tess-v2.5.2-Qwen2-72B-IQ1_M.gguf](https://huggingface.co/bartowski/Tess-v2.5.2-Qwen2-72B-GGUF/blob/main/Tess-v2.5.2-Qwen2-72B-IQ1_M.gguf) | IQ1_M | 27.45GB | Extremely low quality, *not* recommended. |
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Tess-v2.5.2-Qwen2-72B-GGUF --include "Tess-v2.5.2-Qwen2-72B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Tess-v2.5.2-Qwen2-72B-GGUF --include "Tess-v2.5.2-Qwen2-72B-Q8_0.gguf/*" --local-dir Tess-v2.5.2-Qwen2-72B-Q8_0
```
You can either specify a new local-dir (Tess-v2.5.2-Qwen2-72B-Q8_0) or download them all in place (./)
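Once downloaded, a quant can be run directly with llama.cpp — a sketch (depending on your build the binary may be `./main` instead of `./llama-cli`; the prompt should follow the ChatML format above):
```
./llama-cli -m Tess-v2.5.2-Qwen2-72B-Q4_K_M.gguf \
  -p "<|im_start|>user\nHello<|im_end|>\n<|im_start|>assistant\n" -n 256
```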
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
keras/deeplab_v3_plus_resnet50_pascalvoc | keras | "2025-03-24T22:40:07Z" | 231 | 1 | keras-hub | [
"keras-hub",
"image-segmentation",
"arxiv:1802.02611",
"license:apache-2.0",
"region:us"
] | image-segmentation | "2024-10-28T20:10:29Z" | ---
library_name: keras-hub
license: apache-2.0
tags:
- image-segmentation
pipeline_tag: image-segmentation
---
### Model Overview
The DeepLabv3+ model was developed by Google for semantic segmentation. This guide demonstrates how to fine-tune and use the DeepLabv3+ model for image semantic segmentation with KerasCV. Its architecture combines atrous convolutions, contextual information aggregation, and powerful backbones to achieve accurate and detailed semantic segmentation. The DeepLabv3+ model has been shown to achieve state-of-the-art results on a variety of image segmentation benchmarks. This model is supported in both KerasCV and KerasHub. KerasCV will no longer be actively developed, so please try to use KerasHub.
Weights are released under the [Apache 2 License](https://apache.org/licenses/LICENSE-2.0). Keras model code is released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [DeepLabV3Plus Quickstart Notebook](https://www.kaggle.com/code/prasadsachin/deeplabv3plus-quickstart)
* [DeepLabV3Plus Finetune Notebook](https://www.kaggle.com/code/prasadsachin/deeplabv3plus-finetune-notebook/)
* [DeepLabV3Plus API Documentation](https://keras.io/api/keras_hub/models/deeplab_v3/)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| deeplab_v3_plus_resnet50_pascalvoc | 39.1M | DeeplabV3Plus with a ResNet50 v2 backbone. Trained on the PascalVOC 2012 semantic segmentation task, which consists of 20 classes and one background class. This model achieves a final categorical accuracy of 89.34% and a mIoU of 0.6391 on the evaluation dataset. This preset is only compatible with Keras 3. |
## Model card
https://arxiv.org/abs/1802.02611
## Example Usage
Load DeepLabv3+ presets. DeepLabv3+ extends DeepLabv3 with a simple yet
effective decoder module that refines segmentation results, especially
along object boundaries.
```python
images = np.ones(shape=(1, 96, 96, 3))
labels = np.zeros(shape=(1, 96, 96, 2))
segmenter = keras_hub.models.DeepLabV3ImageSegmenter.from_preset(
"deeplab_v3_plus_resnet50_pascalvoc",
)
segmenter.predict(images)
```
Specify `num_classes` to load a randomly initialized segmentation head.
```python
segmenter = keras_hub.models.DeepLabV3ImageSegmenter.from_preset(
"deeplab_v3_plus_resnet50_pascalvoc",
num_classes=2,
)
segmenter.preprocessor.image_size = (96, 96)
segmenter.fit(images, labels, epochs=3)
segmenter.predict(images) # Trained 2 class segmentation.
```
## Example Usage with Hugging Face URI
Load DeepLabv3+ presets. DeepLabv3+ extends DeepLabv3 with a simple yet
effective decoder module that refines segmentation results, especially
along object boundaries.
```python
images = np.ones(shape=(1, 96, 96, 3))
labels = np.zeros(shape=(1, 96, 96, 2))
segmenter = keras_hub.models.DeepLabV3ImageSegmenter.from_preset(
"hf://keras/deeplab_v3_plus_resnet50_pascalvoc",
)
segmenter.predict(images)
```
Specify `num_classes` to load a randomly initialized segmentation head.
```python
segmenter = keras_hub.models.DeepLabV3ImageSegmenter.from_preset(
"hf://keras/deeplab_v3_plus_resnet50_pascalvoc",
num_classes=2,
)
segmenter.preprocessor.image_size = (96, 96)
segmenter.fit(images, labels, epochs=3)
segmenter.predict(images) # Trained 2 class segmentation.
```
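The segmenter returns per-pixel class logits; a short sketch for turning a prediction into a class-index mask for visualization:
```python
import numpy as np

preds = segmenter.predict(images)  # shape: (batch, height, width, num_classes)
mask = np.argmax(preds, axis=-1)   # per-pixel class indices
```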
|
ksmcg/controlnet-room-seg | ksmcg | "2023-04-26T11:11:12Z" | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"jax-diffusers-event",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-04-26T07:48:55Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- jax-diffusers-event
inference: true
---
# controlnet - ksmcg/controlnet-room-seg
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning. You can find some example images in the following.
|
sd-concepts-library/between2-mt-fade | sd-concepts-library | "2022-09-09T18:20:56Z" | 0 | 2 | null | [
"license:mit",
"region:us"
] | null | "2022-09-09T18:12:25Z" | ---
license: mit
---
### between2 MT fade on Stable Diffusion
This is the `<b2MTfade>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
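With recent `diffusers` versions, the concept can also be loaded programmatically — a sketch (the base pipeline ID is assumed; any SD v1.x checkpoint should work):
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/between2-mt-fade")
image = pipe("a landscape in the style of <b2MTfade>").images[0]
```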
Here is the new concept you will be able to use as a `style`:





















|
Aotsuyu/Soushiki | Aotsuyu | "2023-12-11T17:37:25Z" | 0 | 7 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-11-21T02:52:04Z" | ---
license: creativeml-openrail-m
---
<center>
<h1 style="font-size:1.6rem;">
<b>
Soushiki (葬式)
</b>
</h1>
</center>
The model some may have previously downloaded as `9-AAAAAAA` has been renamed to SoushikiV0.9.
For 1.0 I very slightly increased finger separation (it doesn't necessarily mean better hands; rather, it's less likely that hands become an eldritch mess).
I bumped the vibrance a bit.
Don't get hung up on the versions too much; 0.9 might be more to your liking.
### previews
<img src="https://huggingface.co/Aotsuyu/Soushiki/resolve/main/img/xyz_grid-0043-2155415307.png" width="50%"></img>
<img src="https://huggingface.co/Aotsuyu/Soushiki/resolve/main/img/xyz_grid-0044-384282104.png" width="50%"></img>
<img src="https://huggingface.co/Aotsuyu/Soushiki/resolve/main/img/xyz_grid-0048-3671274917.png" width="50%"></img>
<img src="https://huggingface.co/Aotsuyu/Soushiki/resolve/main/img/xyz_grid-0049-73335132.png" width="50%"></img>
<img src="https://huggingface.co/Aotsuyu/Soushiki/resolve/main/img/xyz_grid-0050-667165639.png" width="50%"></img>
### settings
~~Use whatever the fuck you want.~~
I keep my negative to `low quality, worst quality`, and usually use DPM++ 2M Karras or Restart samplers at 20-25 steps with 5-10 cfg.
kl-f8-anime2 or a variant for VAE.
### ramblings
Yet another fairly shizo merge. I started off from Gishiki, adding elements I liked from CTD's lovely connectmix3 and Jemnite's amazing TangYuanV3.
This time I also utilized some artist LoRAs and some diffLoRAs to push the textures, contrast, details, noise offset, and such in the directions I wanted.
The artstyle loras were mostly mine, but I got some off of [Derrian](https://civitai.com/user/derriandistro/models) and one anon.
I consider this less style-neutral than what I usually go for, but I had no trouble using LoRAs with it.
This is a product of autistic XYZ comparison and abusing supermerger and I find it satisfying for now.
### does this have new holos
No, I can't be arsed to make a finetune from scratch, maybe sometime in the future that will change.
Use this or whatever for holo advent https://huggingface.co/Aotsuyu/LoRA/blob/main/characters/%5Bchars%5D%20holoadventv5.safetensors
|
mshen2/llama3-8b-v4-short-em-up | mshen2 | "2025-03-20T01:33:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-20T01:29:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/SASTRI_V0.1-i1-GGUF | mradermacher | "2024-11-16T08:55:09Z" | 105 | 0 | transformers | [
"transformers",
"gguf",
"hi",
"en",
"bh",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-11-16T07:12:07Z" | ---
base_model: divyanshukunwar/SASTRI_V0.1
language:
- hi
- en
- bh
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/divyanshukunwar/SASTRI_V0.1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SASTRI_V0.1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
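As a concrete example, a typical invocation with the recommended Q4_K_M quant from the table below (swap in whichever quant file you download):

```bash
# Run the recommended Q4_K_M quant directly from this repo with llama.cpp's CLI
llama-cli --hf-repo mradermacher/SASTRI_V0.1-i1-GGUF \
  --hf-file SASTRI_V0.1.i1-Q4_K_M.gguf \
  -p "The meaning to life and the universe is"
```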
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/SASTRI_V0.1-i1-GGUF/resolve/main/SASTRI_V0.1.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kylelovesllms/Wav2Vec2Ipa | kylelovesllms | "2024-12-09T03:18:36Z" | 28 | 0 | null | [
"pytorch",
"wav2vec2",
"generated_from_trainer",
"dataset:timit_asr",
"license:apache-2.0",
"region:us"
] | null | "2024-11-07T23:21:07Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- timit_asr
model-index:
- name: Wav2Vec2Ipa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Wav2Vec2Ipa
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the timit_asr dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
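As a rough sketch of inference, assuming the repository ships a compatible `Wav2Vec2Processor` and vocabulary (which this card does not confirm):

```python
# Sketch only: assumes the repo includes a matching processor/vocabulary,
# which this card does not confirm.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo = "kylelovesllms/Wav2Vec2Ipa"
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

# Replace with a real 16 kHz mono waveform; one second of silence is a smoke test.
speech = [0.0] * 16_000
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```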
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.17.0
- Pytorch 2.5.0+cu121
- Datasets 1.18.3
- Tokenizers 0.19.1
|
havinash-ai/f3309fbc-3fdd-47fe-9fba-87dd620d6cdc | havinash-ai | "2025-02-09T05:39:13Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/codellama-7b",
"base_model:adapter:unsloth/codellama-7b",
"license:apache-2.0",
"region:us"
] | null | "2025-02-09T03:44:38Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/codellama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f3309fbc-3fdd-47fe-9fba-87dd620d6cdc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# f3309fbc-3fdd-47fe-9fba-87dd620d6cdc
This model is a fine-tuned version of [unsloth/codellama-7b](https://huggingface.co/unsloth/codellama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
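Since this is a PEFT adapter over `unsloth/codellama-7b`, loading it would presumably follow the standard PEFT pattern; the sketch below is not verified against this checkpoint, and given the `nan` evaluation loss reported above, outputs should be treated with suspicion.

```python
# Sketch: standard PEFT adapter loading; not verified against this checkpoint.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/codellama-7b")
tokenizer = AutoTokenizer.from_pretrained("unsloth/codellama-7b")
model = PeftModel.from_pretrained(base, "havinash-ai/f3309fbc-3fdd-47fe-9fba-87dd620d6cdc")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```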
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
AlignmentResearch/robust_llm_pythia-tt-410m-mz-ada-v3-ch-135000 | AlignmentResearch | "2024-03-22T22:13:04Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-410m-deduped",
"base_model:finetune:EleutherAI/pythia-410m-deduped",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-03-22T22:12:14Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-410m-deduped
model-index:
- name: robust_llm_pythia-tt-410m-mz-ada-v3-ch-135000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-tt-410m-mz-ada-v3-ch-135000
This model is a fine-tuned version of [EleutherAI/pythia-410m-deduped](https://huggingface.co/EleutherAI/pythia-410m-deduped) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
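The card gives no usage snippet; a plausible minimal sketch for a `gpt_neox` sequence-classification checkpoint follows (the label meanings are not documented here).

```python
# Sketch: assumes a standard sequence-classification head; label meanings
# are not documented in this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "AlignmentResearch/robust_llm_pythia-tt-410m-mz-ada-v3-ch-135000"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```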
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Triangle104/EVA-abliterated-TIES-Qwen2.5-1.5B-Q6_K-GGUF | Triangle104 | "2025-01-31T21:37:19Z" | 22 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B",
"base_model:quantized:nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-31T21:37:10Z" | ---
base_model: nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: apache-2.0
---
# Triangle104/EVA-abliterated-TIES-Qwen2.5-1.5B-Q6_K-GGUF
This model was converted to GGUF format from [`nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B`](https://huggingface.co/nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/EVA-abliterated-TIES-Qwen2.5-1.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/EVA-abliterated-TIES-Qwen2.5-1.5B-Q6_K-GGUF --hf-file eva-abliterated-ties-qwen2.5-1.5b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/EVA-abliterated-TIES-Qwen2.5-1.5B-Q6_K-GGUF --hf-file eva-abliterated-ties-qwen2.5-1.5b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/EVA-abliterated-TIES-Qwen2.5-1.5B-Q6_K-GGUF --hf-file eva-abliterated-ties-qwen2.5-1.5b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/EVA-abliterated-TIES-Qwen2.5-1.5B-Q6_K-GGUF --hf-file eva-abliterated-ties-qwen2.5-1.5b-q6_k.gguf -c 2048
```
|
NghiemAbe/gpt-neo-vi-small-v6 | NghiemAbe | "2024-03-28T06:34:53Z" | 175 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-07T13:16:27Z" | ---
base_model: NghiemAbe/gpt-neo-vi-small-v5
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-vi-small-v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-vi-small-v6
This model is a fine-tuned version of [NlpHUST/gpt-neo-vi-small](https://huggingface.co/NlpHUST/gpt-neo-vi-small) on the [ViQuad](https://huggingface.co/datasets/NghiemAbe/viquad) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4156
## Model description
More information needed
## Intended uses & limitations
More information needed
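No usage example is provided; below is a minimal generation sketch. The prompt format is a guess, since the ViQuad fine-tuning format is not documented here.

```python
# Sketch: the expected prompt format for the ViQuad fine-tune is not documented.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "NghiemAbe/gpt-neo-vi-small-v6"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# "How many provinces does Vietnam have?" as a sample Vietnamese question
inputs = tokenizer("Việt Nam có bao nhiêu tỉnh thành?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```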
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3849 | 0.1 | 25 | 0.5802 |
| 0.3747 | 0.19 | 50 | 0.5680 |
| 0.3843 | 0.29 | 75 | 0.5695 |
| 0.4016 | 0.39 | 100 | 0.5782 |
| 0.4101 | 0.49 | 125 | 0.5563 |
| 0.4011 | 0.58 | 150 | 0.5162 |
| 0.3729 | 0.68 | 175 | 0.4888 |
| 0.3512 | 0.78 | 200 | 0.4544 |
| 0.316 | 0.88 | 225 | 0.4319 |
| 0.3126 | 0.97 | 250 | 0.4156 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
MirageML/lowpoly-landscape | MirageML | "2023-05-05T20:53:06Z" | 11 | 5 | diffusers | [
"diffusers",
"stable-diffusion",
"text-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-11-28T08:57:57Z" | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
# Low Poly Landscape on Stable Diffusion via Dreambooth
This is the Stable Diffusion model fine-tuned on the Low Poly Landscape concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of lowpoly_landscape**
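A minimal `diffusers` sketch using that instance prompt (this repo is tagged with the standard `StableDiffusionPipeline` layout; the fp16/CUDA settings are assumptions, adjust for your hardware):

```python
# Sketch: uses the repo's tagged StableDiffusionPipeline layout and the card's
# instance prompt; fp16 on CUDA is an assumption, not a requirement.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "MirageML/lowpoly-landscape", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of lowpoly_landscape, mountains at sunset").images[0]
image.save("lowpoly_landscape.png")
```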
# Run on [Mirage](https://app.mirageml.com)
Run this model and explore text-to-3D on [Mirage](https://app.mirageml.com)!
Here is a sample output for this model:

# Share your Results and Reach us on [Discord](https://discord.gg/9B2Pu2bEvj)!
[](https://discord.gg/9B2Pu2bEvj)
[Image Source](https://www.deviantart.com/kautsar211086/art/Long-Time-No-See-457162094) |
data-corentinv/bloom-fourthbrain-hackathon-1b7-lora-ads | data-corentinv | "2023-04-05T21:05:48Z" | 4 | 0 | transformers | [
"transformers",
"bloom",
"text-generation",
"fourthbrain",
"dtlb",
"text2text-generation",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-04-05T19:18:38Z" | ---
license: mit
pipeline_tag: text2text-generation
language:
- en
metrics:
- accuracy
library_name: transformers
tags:
- fourthbrain
- dtlb
--- |
stablediffusionapi/urpm-v13 | stablediffusionapi | "2025-01-20T11:25:47Z" | 20 | 6 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-07-18T02:50:34Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# Uber Realistic Porn Merge V1.3 API Inference

## Get API Key
Get your API key from [ModelsLab](https://modelslab.com/); no payment needed.
Replace the key in the code below and set **model_id** to "urpm-v13".
Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/urpm-v13)
Model link: [View model](https://stablediffusionapi.com/models/urpm-v13)
Credits: [View credits](https://civitai.com/?query=Uber%20Realistic%20Porn%20Merge%20V1.3)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "urpm-v13",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |