| Column | Type | Range |
|:---|:---|:---|
| modelId | string | length 5 – 139 |
| author | string | length 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-08-12 06:28:41 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 498 classes |
| tags | list | length 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-08-12 06:28:26 |
| card | string | length 11 – 1.01M |
mradermacher/E-Model-V1-i1-GGUF | mradermacher | last_modified: 2025-08-11T06:37:27Z | downloads: 162 | likes: 0 | library_name: transformers | tags: [transformers, gguf, chemistry, tr, dataset:BrewInteractive/alpaca-tr, dataset:ituperceptron/turkish_medical_reasoning, base_model:MeowML/E-Model-V1, base_model:quantized:MeowML/E-Model-V1, license:mit, endpoints_compatible, region:us, imatrix, conversational] | pipeline_tag: null | createdAt: 2025-03-30T01:56:47Z
---
base_model: MeowML/E-Model-V1
datasets:
- BrewInteractive/alpaca-tr
- ituperceptron/turkish_medical_reasoning
language:
- tr
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- chemistry
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/MeowML/E-Model-V1
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#E-Model-V1-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/E-Model-V1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
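As a concrete starting point, here is a minimal sketch (not an official example) of downloading one of the quants listed below and running it with `llama-cpp-python`; the chosen quant file and the Turkish prompt are only illustrations:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one quant from this repo; pick any file from the table below.
gguf_path = hf_hub_download(
    repo_id="mradermacher/E-Model-V1-i1-GGUF",
    filename="E-Model-V1.i1-Q4_K_M.gguf",
)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Merhaba, nasılsın?", max_tokens=128)
print(out["choices"][0]["text"])
```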
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ1_S.gguf) | i1-IQ1_S | 1.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ2_S.gguf) | i1-IQ2_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ2_M.gguf) | i1-IQ2_M | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q2_K.gguf) | i1-Q2_K | 2.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ3_S.gguf) | i1-IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ3_M.gguf) | i1-IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q4_0.gguf) | i1-Q4_0 | 4.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q4_1.gguf) | i1-Q4_1 | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/E-Model-V1-i1-GGUF/resolve/main/E-Model-V1.i1-Q6_K.gguf) | i1-Q6_K | 6.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
kapalbalap/blockassist-bc-peaceful_wary_owl_1754894156 | kapalbalap | last_modified: 2025-08-11T06:36:59Z | downloads: 0 | likes: 0 | library_name: null | tags: [gensyn, blockassist, gensyn-blockassist, minecraft, peaceful wary owl, arxiv:2504.07091, region:us] | pipeline_tag: null | createdAt: 2025-08-11T06:36:45Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
3ZadeSSG/RT-MPINet | 3ZadeSSG | last_modified: 2025-08-11T06:36:56Z | downloads: 0 | likes: 0 | library_name: null | tags: [view-synthesis, rendering, multiplane, multiplane-image, mpi, image-to-image, license:agpl-3.0, region:us] | pipeline_tag: image-to-image | createdAt: 2025-08-04T13:43:33Z
---
license: agpl-3.0
pipeline_tag: image-to-image
tags:
- view-synthesis
- rendering
- multiplane
- multiplane-image
- mpi
---
<div align="center">
<a href="#"><img src='https://img.shields.io/badge/-Paper-00629B?style=flat&logo=ieee&logoColor=white' alt='arXiv'></a>
<a href='https://realistic3d-miun.github.io/Research/RT_MPINet/index.html'><img src='https://img.shields.io/badge/Project_Page-Website-green?logo=googlechrome&logoColor=white' alt='Project Page'></a>
<a href='https://huggingface.co/spaces/3ZadeSSG/RT-MPINet'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo_(RT_MPINet)-blue'></a>
</div>
# RT-MPINet
#### Real-Time View Synthesis with Multiplane Image Network using Multimodal Supervision (RT-MPINet)
We present a real-time multiplane image (MPI) network. Unlike existing MPI based approaches that often rely on a separate depth estimation network to guide the network for estimating MPI parameters, our method directly predicts these parameters from a single RGB image. To guide the network we present a multimodal training strategy utilizing joint supervision from view synthesis and depth estimation losses. More details can be found in the paper.
**Please head to the [Project Page](https://realistic3d-miun.github.io/Research/RT_MPINet/index.html) to see supplementary materials**
## Setup
1. Clone the GitHub repository:
```bash
git clone https://github.com/Realistic3D-MIUN/RT-MPINet
cd RT-MPINet
```
2. Install dependencies:
```bash
pip install -r requirements.txt
```
3. Install PyTorch3D after the general dependencies have been installed:
```bash
pip install "pytorch3d @ git+https://github.com/facebookresearch/pytorch3d.git@89653419d0973396f3eff1a381ba09a07fffc2ed"
```
## Checkpoints (Best Checkpoints Will Be Updated Soon)
Pretrained model checkpoints should be placed in the `checkpoint/` directory. Example filenames:
- `checkpoint_RT_MPI_Small.pth`
- `checkpoint_RT_MPI_Medium.pth`
- `checkpoint_RT_MPI_Large.pth`
| Model | Size | Parameters | Checkpoint |
|-----------------|--------|------------|----------------|
| Small | 26 MB | 6.6 Million| [Download](https://huggingface.co/3ZadeSSG/RT-MPINet/resolve/main/checkpoint_RT_MPI_Small.pth) |
| Medium (Default)| 278 MB | 69 Million | [Download](https://huggingface.co/3ZadeSSG/RT-MPINet/resolve/main/checkpoint_RT_MPI_Medium.pth) |
| Large | 1.2 GB | 288 Million| [Download](https://huggingface.co/3ZadeSSG/RT-MPINet/resolve/main/checkpoint_RT_MPI_Large.pth) |
## Usage
### 1. Live Rendering Demo
You can load any image and run model inference each time the camera position changes. The frame rate is therefore limited by the inference speed of your GPU.
```bash
python renderLiveWithMouseControl.py \
--input_image <path_to_image> \
--model_type <small|medium|large> \
--checkpoint_path <path_to_checkpoint> \
--height <height> \
--width <width>
```
Example:
```bash
python renderLiveWithMouseControl.py \
--input_image ./samples/moon.jpg \
--model_type medium \
--checkpoint_path ./checkpoint/checkpoint_RT_MPI_Medium.pth \
--height 256 \
--width 256
```
### 2. Inference: Predict MPIs from an image and render afterwards
The predicted MPIs can be used for offline rendering, which is much faster since the model is not queried each time the camera changes. This requires two steps:
* First, predict the MPIs:
```bash
python predictMPIs.py \
--input_image <path_to_image> \
--model_type <small|medium|large> \
--checkpoint_path <path_to_checkpoint> \
--save_dir <output_dir> \
--height <height> \
--width <width>
```
* Second, load the MPIs and render views without invoking the model:
```bash
python renderPreProcessedWithMouseControl.py \
--layer_path <output_dir> \
--height <height> \
--width <width>
```
Example:
```bash
python predictMPIs.py \
--input_image ./samples/moon.jpg \
--model_type medium \
--checkpoint_path ./checkpoint/checkpoint_RT_MPI_Medium.pth \
--save_dir ./processedLayers/ \
--height 384 \
--width 384
```
```bash
python renderPreProcessedWithMouseControl.py \
--layer_path ./processedLayers/ \
--height 384 \
--width 384
```
### 3. Web Demo (Gradio)
You can run the Hugging Face demo app locally (using your own GPU for faster inference) with:
```bash
python app.py
```
## Supported Resolutions
We have tested our model with the following resolutions:
- 256x256
- 384x384
- 512x512
- 256x384
- 384x512
**Note:** If you use a non-square aspect ratio, you need to modify the torch transform accordingly (see the sketch below).
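For example, assuming the preprocessing uses a `torchvision` resize transform (the exact transform lives in the repository scripts, so treat this as a hypothetical sketch), a non-square resolution such as 256x384 could be handled like this:
```python
import torchvision.transforms as T

# Hypothetical preprocessing for a non-square input: torchvision's Resize takes (height, width).
transform = T.Compose([
    T.Resize((256, 384)),
    T.ToTensor(),
])
```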
## Acknowledgements
- We thank the authors of [AdaMPI](https://github.com/yxuhan/AdaMPI) for their implementation of the homography renderer, which is used in this codebase under the `./utils` directory
- We thank the author of the [Deepview renderer](https://github.com/Findeton/deepview) template, which was used on our project page.
## Citation
If you use our work, please use the following citation:
```
@inproceedings{gond2025rtmpi,
title={Real-Time View Synthesis with Multiplane Image Network using Multimodal Supervision},
author={Gond, Manu and Shamshirgarha, Mohammadreza and Zerman, Emin and Knorr, Sebastian and Sj{\"o}str{\"o}m, M{\aa}rten},
booktitle={2025 IEEE 27th International Workshop on Multimedia Signal Processing (MMSP)},
pages={},
year={2025},
organization={IEEE}
}
```
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754893990 | ggozzy | last_modified: 2025-08-11T06:34:33Z | downloads: 0 | likes: 0 | library_name: null | tags: [gensyn, blockassist, gensyn-blockassist, minecraft, stubby yapping mandrill, arxiv:2504.07091, region:us] | pipeline_tag: null | createdAt: 2025-08-11T06:34:20Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754892690 | Sayemahsjn | last_modified: 2025-08-11T06:29:32Z | downloads: 0 | likes: 0 | library_name: null | tags: [gensyn, blockassist, gensyn-blockassist, minecraft, playful feline octopus, arxiv:2504.07091, region:us] | pipeline_tag: null | createdAt: 2025-08-11T06:29:28Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
vengky/blockassist-bc-wild_gentle_manatee_1754891289 | vengky | last_modified: 2025-08-11T06:27:59Z | downloads: 0 | likes: 0 | library_name: null | tags: [gensyn, blockassist, gensyn-blockassist, minecraft, wild gentle manatee, arxiv:2504.07091, region:us] | pipeline_tag: null | createdAt: 2025-08-11T06:27:53Z
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild gentle manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
haruntrkmn/ty-bg-remover-test | haruntrkmn | last_modified: 2025-08-11T06:27:04Z | downloads: 0 | likes: 0 | library_name: null | tags: [onnx, computer-vision, image-background-removal, image-matting, e-commerce, is-net, image-segmentation, license:apache-2.0, region:us] | pipeline_tag: image-segmentation | createdAt: 2025-08-06T07:54:16Z
---
license: apache-2.0
pipeline_tag: image-segmentation
language: []
base_model: isnet-general-use.pth
model_type: ty_fashion_bg_remover
tags:
- computer-vision
- image-background-removal
- image-matting
- e-commerce
- is-net
---
# TY Fashion Background Remover
_TY Fashion Background Remover is an IS-Net–based human segmentation and background-removal model designed to automatically detect and isolate people in images. It produces high-quality binary/alpha masks and trimmed RGBA composites intended for downstream editing, compositing, and automated image pipelines. Although optimized for fashion photography, it is suitable for any application where the image contains a person and the goal is to separate them cleanly from the background._
## Model Details
- **Architecture**: IS-Net
- **Objective**: Fine-tune the isnet-general-use model on TY fashion images to improve performance on fashion imagery
- **Training Data**: Large-scale Trendyol fashion product image dataset containing human models
- **Hardware**: Multi-GPU training with PyTorch
- **Framework**: PyTorch
## Intended Use
- Automatically remove backgrounds from images containing humans, isolating the subject for further editing, compositing, or analysis.
- Designed for use in applications such as e-commerce product photography, fashion catalogs, profile pictures, and creative media projects where the human subject needs to be cleanly separated from the background.
- Optimized for images with clear human presence; not intended for objects, animals, or scenes without people.
- Can be used as a preprocessing step for downstream tasks like virtual try-on, background replacement, and image-based content generation.
## Usage
Complete example to load the model, remove the background of an image, and save the result:
```python
"""
ONNX inference script for image segmentation model.
This script loads an ONNX model and performs inference on an input image to generate
an alpha mask. The mask is combined with the RGB image and saved as output.
"""
import onnxruntime as ort
from utils import process_image
if __name__ == "__main__":
MODEL_PATH = "model.onnx"
SRC = "https://cdn.dsmcdn.com/ty184/product/media/images/20210924/23/136268224/224296134/1/1_org_zoom.jpg"
OUTPUT_FILE = "out.png"
# Initialize ONNX runtime session with CUDA and CPU providers
ort_session = ort.InferenceSession(
MODEL_PATH,
providers=["CUDAExecutionProvider", "CPUExecutionProvider"]
)
process_image(SRC, ort_session, MODEL_PATH, OUTPUT_FILE)
```
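If the repository's `utils.process_image` helper is not available, the steps it wraps can be approximated inline. The sketch below makes assumptions about the pre- and post-processing (input size, normalization, and output layout should be checked against the actual model; the card lists a 1800x1200 training input size) and is not the official pipeline:
```python
import numpy as np
import onnxruntime as ort
from PIL import Image

def remove_background(image_path: str, session: ort.InferenceSession,
                      input_size=(1024, 1024)) -> Image.Image:
    """Hypothetical IS-Net-style inference: resize, normalize, predict an alpha mask, compose RGBA."""
    image = Image.open(image_path).convert("RGB")
    original_size = image.size  # (width, height)

    # Preprocess: resize, scale to [0, 1], NCHW layout (the exact normalization is an assumption).
    resized = image.resize(input_size)
    x = np.transpose(np.asarray(resized, dtype=np.float32) / 255.0, (2, 0, 1))[None, ...]

    # Run the model; the first output is assumed to be the alpha/matte map.
    input_name = session.get_inputs()[0].name
    mask = np.squeeze(session.run(None, {input_name: x})[0])
    mask = (mask - mask.min()) / (mask.max() - mask.min() + 1e-8)

    # Postprocess: resize the mask back to the original size and attach it as the alpha channel.
    alpha = Image.fromarray((mask * 255).astype(np.uint8)).resize(original_size)
    rgba = image.convert("RGBA")
    rgba.putalpha(alpha)
    return rgba

# Example (assumes a local image and model file):
# session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
# remove_background("product.jpg", session).save("out.png")
```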
## Model Performance
- **High-accuracy image matting**: Especially for intricate details on human models, such as hair and clothing textures.
### Training Configuration
- **Backbone**: IS-Net general use model trained on DIS dataset V1.0: DIS5K
- **Model Input Size**: 1800x1200
- **Training Framework**: Torch 1.13.1
## Limitations
- **Domain Specificity**: Optimized for e-commerce fashion product images with human models included; may not generalize well to other image domains
- **Image Quality**: Performance may degrade on low-quality, heavily compressed, or significantly distorted images
- **Category Bias**: Performance may vary across different product categories based on training data distribution
## Ethical Considerations
- **Commercial Use**: Designed for e-commerce applications; consider potential impacts on market competition
- **Privacy**: Ensure compliance with data protection regulations when processing product images
- **Fairness**: Monitor for quality disparities across different product categories or brands
## Citation
```bibtex
@misc{trendyol2025fashionbgremover,
title={TY Fashion Background Remover},
author={Trendyol Data Science Team},
year={2025},
howpublished={\url{https://huggingface.co/trendyol/ty-fashion-bg-remover}}
}
```
## Model Card Authors
- Trendyol Data Science Team
## License
This model is released by Trendyol as a source-available, non-open-source model.
### You are allowed to:
- View, download, and evaluate the model weights.
- Use the model for non-commercial research and internal testing.
- Use the model or its derivatives for commercial purposes, provided that:
- You cite Trendyol as the original model creator.
- You notify Trendyol in advance via [[email protected]] or other designated contact.
### You are not allowed to:
- Redistribute or host the model or its derivatives on third-party platforms without prior written consent from Trendyol.
- Use the model in applications violating ethical standards, including but not limited to surveillance, misinformation, or harm to individuals or groups.
By downloading or using this model, you agree to the terms above.
© 2025 Trendyol Teknoloji A.Ş. All rights reserved.
See the [LICENSE](LICENSE) file for more details.
---
_For technical support or questions about this model, please contact the Trendyol Data Science team._
m0vie/my_awesome_billsum_model | m0vie | last_modified: 2025-08-11T06:25:58Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [transformers, safetensors, t5, text2text-generation, generated_from_trainer, base_model:google-t5/t5-small, base_model:finetune:google-t5/t5-small, license:apache-2.0, text-generation-inference, endpoints_compatible, region:us] | pipeline_tag: null | createdAt: 2025-08-11T06:25:52Z
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4871
- Rouge1: 0.1521
- Rouge2: 0.0529
- Rougel: 0.1241
- Rougelsum: 0.1239
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
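For reference, the hyperparameters above correspond roughly to the following 🤗 Transformers configuration. This is a minimal sketch; the `output_dir` and the surrounding dataset/tokenizer setup are assumptions rather than details taken from this card:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="my_awesome_billsum_model",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",          # AdamW (torch) with betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                    # "Native AMP" mixed precision
    predict_with_generate=True,   # not listed above; typically enabled for ROUGE evaluation
)
```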
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 4.7238 | 0.0323 | 2 | 4.5056 | 0.1445 | 0.0494 | 0.1206 | 0.1207 | 20.0 |
| 4.7833 | 0.0645 | 4 | 4.3907 | 0.1452 | 0.0493 | 0.1213 | 0.1215 | 20.0 |
| 4.7564 | 0.0968 | 6 | 4.1875 | 0.1437 | 0.0478 | 0.1198 | 0.1198 | 20.0 |
| 4.6334 | 0.1290 | 8 | 4.0478 | 0.1445 | 0.048 | 0.1198 | 0.1199 | 20.0 |
| 4.4535 | 0.1613 | 10 | 3.9208 | 0.1452 | 0.048 | 0.1204 | 0.1204 | 20.0 |
| 4.0209 | 0.1935 | 12 | 3.7073 | 0.1459 | 0.0484 | 0.121 | 0.1209 | 20.0 |
| 3.7674 | 0.2258 | 14 | 3.5904 | 0.1437 | 0.0474 | 0.1198 | 0.1198 | 20.0 |
| 4.0694 | 0.2581 | 16 | 3.4991 | 0.1419 | 0.0456 | 0.1179 | 0.1179 | 20.0 |
| 3.695 | 0.2903 | 18 | 3.4001 | 0.1412 | 0.0447 | 0.1175 | 0.1174 | 20.0 |
| 3.5436 | 0.3226 | 20 | 3.3312 | 0.1416 | 0.0453 | 0.1177 | 0.1176 | 20.0 |
| 3.5757 | 0.3548 | 22 | 3.2724 | 0.1402 | 0.0445 | 0.1161 | 0.116 | 20.0 |
| 3.6838 | 0.3871 | 24 | 3.2079 | 0.1397 | 0.0434 | 0.1156 | 0.1155 | 20.0 |
| 3.7529 | 0.4194 | 26 | 3.1602 | 0.139 | 0.0424 | 0.1152 | 0.1152 | 20.0 |
| 3.4468 | 0.4516 | 28 | 3.1223 | 0.1383 | 0.0418 | 0.1149 | 0.1147 | 20.0 |
| 3.4188 | 0.4839 | 30 | 3.0881 | 0.1378 | 0.0418 | 0.1144 | 0.1142 | 20.0 |
| 3.2276 | 0.5161 | 32 | 3.0553 | 0.1372 | 0.0412 | 0.1138 | 0.1136 | 20.0 |
| 3.1193 | 0.5484 | 34 | 3.0277 | 0.1377 | 0.0421 | 0.1142 | 0.114 | 20.0 |
| 3.2673 | 0.5806 | 36 | 3.0018 | 0.1357 | 0.0405 | 0.1122 | 0.112 | 20.0 |
| 3.1799 | 0.6129 | 38 | 2.9748 | 0.1354 | 0.04 | 0.1115 | 0.1113 | 20.0 |
| 3.3082 | 0.6452 | 40 | 2.9513 | 0.1343 | 0.0402 | 0.1112 | 0.111 | 20.0 |
| 3.2299 | 0.6774 | 42 | 2.9296 | 0.1333 | 0.0393 | 0.1103 | 0.1102 | 20.0 |
| 3.0226 | 0.7097 | 44 | 2.9087 | 0.1328 | 0.0391 | 0.1101 | 0.11 | 20.0 |
| 3.1423 | 0.7419 | 46 | 2.8889 | 0.1329 | 0.0393 | 0.1102 | 0.1101 | 20.0 |
| 3.0891 | 0.7742 | 48 | 2.8701 | 0.1332 | 0.0398 | 0.1106 | 0.1105 | 20.0 |
| 3.2401 | 0.8065 | 50 | 2.8527 | 0.1328 | 0.0396 | 0.1103 | 0.1103 | 20.0 |
| 3.0209 | 0.8387 | 52 | 2.8360 | 0.1336 | 0.0405 | 0.1115 | 0.1114 | 20.0 |
| 3.0974 | 0.8710 | 54 | 2.8203 | 0.1331 | 0.0393 | 0.1108 | 0.1108 | 20.0 |
| 2.9769 | 0.9032 | 56 | 2.8057 | 0.132 | 0.0392 | 0.1101 | 0.1101 | 20.0 |
| 3.0385 | 0.9355 | 58 | 2.7920 | 0.131 | 0.0381 | 0.1091 | 0.109 | 20.0 |
| 3.2244 | 0.9677 | 60 | 2.7792 | 0.129 | 0.0368 | 0.1075 | 0.1075 | 20.0 |
| 2.9593 | 1.0 | 62 | 2.7729 | 0.1284 | 0.0363 | 0.1071 | 0.1071 | 20.0 |
| 2.9742 | 1.0323 | 64 | 2.7607 | 0.1295 | 0.0369 | 0.1077 | 0.1077 | 20.0 |
| 2.8829 | 1.0645 | 66 | 2.7494 | 0.1291 | 0.0366 | 0.107 | 0.1068 | 20.0 |
| 2.914 | 1.0968 | 68 | 2.7385 | 0.1297 | 0.0374 | 0.1079 | 0.1077 | 20.0 |
| 3.1647 | 1.1290 | 70 | 2.7280 | 0.1305 | 0.0381 | 0.1081 | 0.1081 | 20.0 |
| 3.0356 | 1.1613 | 72 | 2.7181 | 0.131 | 0.0391 | 0.1083 | 0.1082 | 20.0 |
| 3.0923 | 1.1935 | 74 | 2.7084 | 0.132 | 0.04 | 0.1092 | 0.1092 | 20.0 |
| 3.0 | 1.2258 | 76 | 2.6991 | 0.1333 | 0.0405 | 0.1101 | 0.1101 | 20.0 |
| 2.7403 | 1.2581 | 78 | 2.6904 | 0.1335 | 0.0402 | 0.1098 | 0.1098 | 20.0 |
| 3.0324 | 1.2903 | 80 | 2.6819 | 0.1334 | 0.041 | 0.11 | 0.11 | 20.0 |
| 3.1273 | 1.3226 | 82 | 2.6736 | 0.1329 | 0.041 | 0.1097 | 0.1096 | 20.0 |
| 2.9799 | 1.3548 | 84 | 2.6655 | 0.1329 | 0.0416 | 0.1097 | 0.1096 | 20.0 |
| 2.8665 | 1.3871 | 86 | 2.6578 | 0.1342 | 0.0418 | 0.1105 | 0.1104 | 20.0 |
| 2.9902 | 1.4194 | 88 | 2.6505 | 0.135 | 0.042 | 0.1109 | 0.1109 | 20.0 |
| 2.9665 | 1.4516 | 90 | 2.6436 | 0.135 | 0.0416 | 0.1111 | 0.111 | 20.0 |
| 3.056 | 1.4839 | 92 | 2.6369 | 0.1353 | 0.0422 | 0.1111 | 0.1111 | 20.0 |
| 2.7685 | 1.5161 | 94 | 2.6306 | 0.1358 | 0.0428 | 0.1116 | 0.1115 | 20.0 |
| 2.9515 | 1.5484 | 96 | 2.6247 | 0.1362 | 0.0426 | 0.1117 | 0.1116 | 20.0 |
| 2.6475 | 1.5806 | 98 | 2.6192 | 0.1363 | 0.0423 | 0.1117 | 0.1115 | 20.0 |
| 3.0313 | 1.6129 | 100 | 2.6138 | 0.1373 | 0.0429 | 0.1123 | 0.1122 | 20.0 |
| 2.7451 | 1.6452 | 102 | 2.6087 | 0.1377 | 0.0432 | 0.1129 | 0.1127 | 20.0 |
| 2.9397 | 1.6774 | 104 | 2.6039 | 0.1377 | 0.0434 | 0.1132 | 0.1131 | 20.0 |
| 2.8833 | 1.7097 | 106 | 2.5992 | 0.1382 | 0.0434 | 0.1135 | 0.1132 | 20.0 |
| 2.9797 | 1.7419 | 108 | 2.5943 | 0.1383 | 0.0429 | 0.1135 | 0.1133 | 20.0 |
| 2.8241 | 1.7742 | 110 | 2.5896 | 0.1383 | 0.0429 | 0.1136 | 0.1134 | 20.0 |
| 2.7139 | 1.8065 | 112 | 2.5853 | 0.1389 | 0.0424 | 0.1136 | 0.1134 | 20.0 |
| 2.9114 | 1.8387 | 114 | 2.5812 | 0.138 | 0.0421 | 0.1129 | 0.1127 | 20.0 |
| 2.8335 | 1.8710 | 116 | 2.5774 | 0.1382 | 0.0423 | 0.1128 | 0.1126 | 20.0 |
| 2.8012 | 1.9032 | 118 | 2.5740 | 0.1385 | 0.0439 | 0.1134 | 0.1132 | 20.0 |
| 2.8822 | 1.9355 | 120 | 2.5704 | 0.1385 | 0.044 | 0.1139 | 0.1138 | 20.0 |
| 3.0383 | 1.9677 | 122 | 2.5670 | 0.1397 | 0.045 | 0.1152 | 0.1152 | 20.0 |
| 2.9287 | 2.0 | 124 | 2.5636 | 0.1398 | 0.044 | 0.1147 | 0.1146 | 20.0 |
| 2.7666 | 2.0323 | 126 | 2.5601 | 0.1409 | 0.0443 | 0.1155 | 0.1154 | 20.0 |
| 2.5729 | 2.0645 | 128 | 2.5571 | 0.1414 | 0.0449 | 0.1157 | 0.1157 | 20.0 |
| 2.9942 | 2.0968 | 130 | 2.5543 | 0.1417 | 0.045 | 0.1159 | 0.1157 | 20.0 |
| 2.7203 | 2.1290 | 132 | 2.5516 | 0.1422 | 0.0455 | 0.1161 | 0.1161 | 20.0 |
| 2.7695 | 2.1613 | 134 | 2.5490 | 0.1434 | 0.0464 | 0.1169 | 0.1168 | 20.0 |
| 2.7066 | 2.1935 | 136 | 2.5465 | 0.1441 | 0.047 | 0.1173 | 0.1173 | 20.0 |
| 2.9297 | 2.2258 | 138 | 2.5440 | 0.1449 | 0.0479 | 0.118 | 0.118 | 20.0 |
| 2.872 | 2.2581 | 140 | 2.5415 | 0.145 | 0.048 | 0.1181 | 0.118 | 20.0 |
| 2.929 | 2.2903 | 142 | 2.5389 | 0.1457 | 0.0485 | 0.1186 | 0.1185 | 20.0 |
| 2.7474 | 2.3226 | 144 | 2.5363 | 0.1451 | 0.0481 | 0.1181 | 0.1179 | 20.0 |
| 2.9002 | 2.3548 | 146 | 2.5337 | 0.1445 | 0.048 | 0.1175 | 0.1173 | 20.0 |
| 2.8597 | 2.3871 | 148 | 2.5311 | 0.1449 | 0.0487 | 0.118 | 0.118 | 20.0 |
| 2.8553 | 2.4194 | 150 | 2.5287 | 0.1456 | 0.0492 | 0.1184 | 0.1183 | 20.0 |
| 2.8124 | 2.4516 | 152 | 2.5265 | 0.1459 | 0.049 | 0.1183 | 0.1182 | 20.0 |
| 2.9928 | 2.4839 | 154 | 2.5245 | 0.1466 | 0.0496 | 0.119 | 0.1189 | 20.0 |
| 2.7976 | 2.5161 | 156 | 2.5227 | 0.147 | 0.0499 | 0.1193 | 0.1192 | 20.0 |
| 2.9132 | 2.5484 | 158 | 2.5209 | 0.1473 | 0.0505 | 0.1198 | 0.1195 | 20.0 |
| 2.8024 | 2.5806 | 160 | 2.5191 | 0.1478 | 0.0503 | 0.1199 | 0.1198 | 20.0 |
| 2.5642 | 2.6129 | 162 | 2.5174 | 0.147 | 0.0498 | 0.1194 | 0.1192 | 20.0 |
| 2.6441 | 2.6452 | 164 | 2.5159 | 0.147 | 0.0492 | 0.1192 | 0.1191 | 20.0 |
| 2.817 | 2.6774 | 166 | 2.5144 | 0.147 | 0.0492 | 0.1194 | 0.1192 | 20.0 |
| 2.5755 | 2.7097 | 168 | 2.5130 | 0.148 | 0.05 | 0.1206 | 0.1205 | 20.0 |
| 2.8725 | 2.7419 | 170 | 2.5116 | 0.1486 | 0.0504 | 0.121 | 0.1209 | 20.0 |
| 2.5783 | 2.7742 | 172 | 2.5102 | 0.1481 | 0.05 | 0.1204 | 0.1202 | 20.0 |
| 2.7022 | 2.8065 | 174 | 2.5090 | 0.1481 | 0.0502 | 0.1204 | 0.1202 | 20.0 |
| 3.0013 | 2.8387 | 176 | 2.5078 | 0.1478 | 0.0502 | 0.12 | 0.1199 | 20.0 |
| 2.7448 | 2.8710 | 178 | 2.5066 | 0.1485 | 0.0509 | 0.1206 | 0.1203 | 20.0 |
| 2.907 | 2.9032 | 180 | 2.5055 | 0.1489 | 0.051 | 0.1208 | 0.1207 | 20.0 |
| 2.6482 | 2.9355 | 182 | 2.5044 | 0.149 | 0.0507 | 0.1209 | 0.1207 | 20.0 |
| 2.8286 | 2.9677 | 184 | 2.5034 | 0.1492 | 0.0506 | 0.1208 | 0.1206 | 20.0 |
| 2.8935 | 3.0 | 186 | 2.5024 | 0.1493 | 0.0506 | 0.1208 | 0.1205 | 20.0 |
| 2.8126 | 3.0323 | 188 | 2.5014 | 0.1497 | 0.0506 | 0.1209 | 0.1208 | 20.0 |
| 2.9074 | 3.0645 | 190 | 2.5003 | 0.1497 | 0.0506 | 0.1209 | 0.1208 | 20.0 |
| 2.6677 | 3.0968 | 192 | 2.4994 | 0.1506 | 0.0509 | 0.1216 | 0.1215 | 20.0 |
| 2.6578 | 3.1290 | 194 | 2.4984 | 0.1504 | 0.0506 | 0.1213 | 0.1211 | 20.0 |
| 2.74 | 3.1613 | 196 | 2.4975 | 0.1506 | 0.0509 | 0.1215 | 0.1213 | 20.0 |
| 2.9685 | 3.1935 | 198 | 2.4966 | 0.1503 | 0.051 | 0.1216 | 0.1214 | 20.0 |
| 2.6863 | 3.2258 | 200 | 2.4958 | 0.1503 | 0.051 | 0.1216 | 0.1214 | 20.0 |
| 2.8132 | 3.2581 | 202 | 2.4951 | 0.1507 | 0.0512 | 0.1221 | 0.1219 | 20.0 |
| 3.1448 | 3.2903 | 204 | 2.4945 | 0.1507 | 0.0512 | 0.1221 | 0.1219 | 20.0 |
| 2.5556 | 3.3226 | 206 | 2.4939 | 0.1505 | 0.0511 | 0.122 | 0.1217 | 20.0 |
| 2.7849 | 3.3548 | 208 | 2.4933 | 0.1506 | 0.0515 | 0.1222 | 0.122 | 20.0 |
| 2.6321 | 3.3871 | 210 | 2.4927 | 0.1507 | 0.0515 | 0.1224 | 0.1222 | 20.0 |
| 2.8026 | 3.4194 | 212 | 2.4922 | 0.1511 | 0.0517 | 0.1228 | 0.1226 | 20.0 |
| 2.6206 | 3.4516 | 214 | 2.4917 | 0.1511 | 0.0517 | 0.1228 | 0.1226 | 20.0 |
| 2.64 | 3.4839 | 216 | 2.4913 | 0.1516 | 0.0523 | 0.1233 | 0.1232 | 20.0 |
| 2.6653 | 3.5161 | 218 | 2.4908 | 0.1521 | 0.0531 | 0.1238 | 0.1236 | 20.0 |
| 2.5859 | 3.5484 | 220 | 2.4904 | 0.1521 | 0.0531 | 0.1238 | 0.1236 | 20.0 |
| 2.9226 | 3.5806 | 222 | 2.4900 | 0.1523 | 0.0532 | 0.1239 | 0.1237 | 20.0 |
| 2.932 | 3.6129 | 224 | 2.4896 | 0.1523 | 0.0532 | 0.1239 | 0.1237 | 20.0 |
| 2.9146 | 3.6452 | 226 | 2.4892 | 0.1525 | 0.0532 | 0.1243 | 0.124 | 20.0 |
| 2.697 | 3.6774 | 228 | 2.4889 | 0.1525 | 0.0532 | 0.1243 | 0.124 | 20.0 |
| 2.7723 | 3.7097 | 230 | 2.4886 | 0.1525 | 0.0532 | 0.1243 | 0.124 | 20.0 |
| 2.5864 | 3.7419 | 232 | 2.4883 | 0.1522 | 0.053 | 0.1241 | 0.1239 | 20.0 |
| 2.7527 | 3.7742 | 234 | 2.4880 | 0.1522 | 0.053 | 0.1241 | 0.1239 | 20.0 |
| 2.8521 | 3.8065 | 236 | 2.4878 | 0.1525 | 0.0532 | 0.1243 | 0.124 | 20.0 |
| 2.7859 | 3.8387 | 238 | 2.4876 | 0.1521 | 0.0529 | 0.1241 | 0.1239 | 20.0 |
| 2.7103 | 3.8710 | 240 | 2.4874 | 0.1525 | 0.053 | 0.1242 | 0.124 | 20.0 |
| 2.7256 | 3.9032 | 242 | 2.4873 | 0.1521 | 0.0529 | 0.1241 | 0.1239 | 20.0 |
| 2.6557 | 3.9355 | 244 | 2.4872 | 0.1525 | 0.053 | 0.1242 | 0.124 | 20.0 |
| 2.7129 | 3.9677 | 246 | 2.4871 | 0.1521 | 0.0529 | 0.1241 | 0.1239 | 20.0 |
| 2.7372 | 4.0 | 248 | 2.4871 | 0.1521 | 0.0529 | 0.1241 | 0.1239 | 20.0 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
bond005/whisper-podlodka-turbo | bond005 | last_modified: 2025-08-11T06:25:57Z | downloads: 401 | likes: 5 | library_name: transformers | tags: [transformers, safetensors, whisper, automatic-speech-recognition, en, ru, dataset:mozilla-foundation/common_voice_17_0, dataset:bond005/rulibrispeech, dataset:bond005/podlodka_speech, dataset:bond005/sberdevices_golos_10h_crowd, dataset:bond005/sberdevices_golos_100h_farfield, dataset:bond005/taiga_speech_v2, dataset:bond005/audioset-nonspeech, arxiv:2212.04356, base_model:openai/whisper-large-v3-turbo, base_model:finetune:openai/whisper-large-v3-turbo, license:apache-2.0, model-index, endpoints_compatible, region:us] | pipeline_tag: automatic-speech-recognition | createdAt: 2025-06-22T04:12:28Z
---
library_name: transformers
license: apache-2.0
language:
- en
- ru
base_model:
- openai/whisper-large-v3-turbo
pipeline_tag: automatic-speech-recognition
datasets:
- mozilla-foundation/common_voice_17_0
- bond005/rulibrispeech
- bond005/podlodka_speech
- bond005/sberdevices_golos_10h_crowd
- bond005/sberdevices_golos_100h_farfield
- bond005/taiga_speech_v2
- bond005/audioset-nonspeech
metrics:
- wer
model-index:
- name: Whisper-Podlodka-Turbo by Ivan Bondarenko
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Podlodka Speech
type: bond005/podlodka_speech
args: ru
metrics:
- name: Test WER
type: wer
value: 7.81
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ru
type: mozilla-foundation/common_voice_11_0
args: ru
metrics:
- name: Test WER
type: wer
value: 5.22
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sova RuDevices
type: bond005/sova_rudevices
args: ru
metrics:
- name: Test WER
type: wer
value: 15.26
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Russian Librispeech
type: bond005/rulibrispeech
args: ru
metrics:
- name: Test WER
type: wer
value: 9.61
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sberdevices Golos (farfield)
type: bond005/sberdevices_golos_100h_farfield
args: ru
metrics:
- name: Test WER
type: wer
value: 11.26
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Sberdevices Golos (crowd)
type: bond005/sberdevices_golos_10h_crowd
args: ru
metrics:
- name: Test WER
type: wer
value: 11.82
---
# Whisper-Podlodka-Turbo
Whisper-Podlodka-Turbo is a new fine-tuned version of [Whisper large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo). The main goal of the fine-tuning is to improve the quality of speech recognition and speech translation for Russian and English, as well as to reduce the occurrence of hallucinations when processing non-speech audio signals.
## Model Description
**Whisper-Podlodka-Turbo** is a new fine-tuned version of [Whisper-Large-V3-Turbo](https://huggingface.co/openai/whisper-large-v3-turbo), optimized for high-quality Russian speech recognition with proper punctuation + capitalization and enhanced with noise resistance capability.
### Key Benefits
- 🎯 Improved Russian speech recognition quality compared to the base Whisper-Large-V3-Turbo model
- ✍️ Correct Russian punctuation and capitalization
- 🎧 Enhanced background noise resistance
- 🚫 Reduced number of hallucinations, especially in non-speech segments
### Supported Tasks
- Automatic Speech Recognition (ASR):
- 🇷🇺 Russian (primary focus)
- 🇬🇧 English
- Speech Translation:
- Russian ↔️ English
- Speech Language Detection (including non-speech detection)
## Uses
### Installation
**Whisper-Podlodka-Turbo** is supported in Hugging Face 🤗 [Transformers](https://huggingface.co/docs/transformers/index). To run the model, first install the Transformers library. For this example, we'll also install 🤗 Datasets to load a toy audio dataset from the Hugging Face Hub, and 🤗 Accelerate to reduce the model loading time:
```bash
pip install --upgrade pip
pip install --upgrade transformers datasets[audio] accelerate
```
Also, I recommend using [`whisper-lid`](https://github.com/bond005/whisper-lid) for initial spoken language detection. Therefore, this library is also worth installing:
```bash
pip install --upgrade whisper-lid
```
### Usage Cases
#### Speech recognition
The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) class to transcribe audio in an arbitrary language:
```python
import librosa # for loading sound from local file
from transformers import pipeline # for working with Whisper-Podlodka-Turbo
import wget # for downloading demo sound from its URL
from whisper_lid.whisper_lid import detect_language_in_speech # for spoken language detection
model_id = "bond005/whisper-podlodka-turbo" # the best Whisper model :-)
target_sampling_rate = 16_000 # Hz
asr = pipeline(model=model_id, device_map='auto', torch_dtype='auto')
# An example of speech recognition in Russian, spoken by a native speaker of this language
sound_ru_url = 'https://huggingface.co/bond005/whisper-podlodka-turbo/resolve/main/test_sound_ru.wav'
sound_ru_name = wget.download(sound_ru_url)
sound_ru = librosa.load(sound_ru_name, sr=target_sampling_rate, mono=True)[0]
print('Duration of sound with Russian speech = {0:.3f} seconds.'.format(
sound_ru.shape[0] / target_sampling_rate
))
detected_languages = detect_language_in_speech(
sound_ru,
asr.feature_extractor,
asr.tokenizer,
asr.model
)
print('Top-3 languages:')
lang_text_width = max([len(it[0]) for it in detected_languages])
for it in detected_languages[0:3]:
print(' {0:>{1}} {2:.4f}'.format(it[0], lang_text_width, it[1]))
recognition_result = asr(
sound_ru,
generate_kwargs={'task': 'transcribe', 'language': detected_languages[0][0]},
return_timestamps=False
)
print(recognition_result['text'] + '\n')
# An example of speech recognition in English, pronounced by a non-native speaker of that language with an accent
sound_en_url = 'https://huggingface.co/bond005/whisper-podlodka-turbo/resolve/main/test_sound_en.wav'
sound_en_name = wget.download(sound_en_url)
sound_en = librosa.load(sound_en_name, sr=target_sampling_rate, mono=True)[0]
print('Duration of sound with English speech = {0:.3f} seconds.'.format(
sound_en.shape[0] / target_sampling_rate
))
detected_languages = detect_language_in_speech(
sound_en,
asr.feature_extractor,
asr.tokenizer,
asr.model
)
print('Top-3 languages:')
lang_text_width = max([len(it[0]) for it in detected_languages])
for it in detected_languages[0:3]:
print(' {0:>{1}} {2:.4f}'.format(it[0], lang_text_width, it[1]))
recognition_result = asr(
sound_en,
generate_kwargs={'task': 'transcribe', 'language': detected_languages[0][0]},
return_timestamps=False
)
print(recognition_result['text'] + '\n')
```
As a result, you can see a text output like this:
```text
Duration of sound with Russian speech = 29.947 seconds.
Top-3 languages:
russian 0.9568
english 0.0372
ukrainian 0.0013
Ну, виспер сам по себе. Что такое виспер? Виспер — это уже полноценное end-to-end нейросетевое решение с авторегрессионным декодером, то есть это не чистый энкодер, как Wave2Vec, это не просто текстовый сек-то-сек, энкодер-декодер, как T5, это полноценный алгоритм преобразования речи в текст, где энкодер учитывает, прежде всего, акустические фичи речи, ну и семантика тоже постепенно подмешивается, а декодер — это уже языковая модель, которая генерирует токен за токеном.
Duration of sound with English speech = 20.247 seconds.
Top-3 languages:
english 0.9526
russian 0.0311
polish 0.0006
Ensembling can help us to solve a well-known bias-variance trade-off. We can decrease variance on basis of large ensemble, large ensemble of different algorithms.
```
#### Speech recognition with timestamps
In addition to the usual recognition, the model can also provide timestamps for recognized speech fragments:
```python
recognition_result = asr(
sound_ru,
generate_kwargs={'task': 'transcribe', 'language': 'russian'},
return_timestamps=True
)
print('Recognized chunks of Russian speech:')
for it in recognition_result['chunks']:
print(f' {it}')
recognition_result = asr(
sound_en,
generate_kwargs={'task': 'transcribe', 'language': 'english'},
return_timestamps=True
)
print('\nRecognized chunks of English speech:')
for it in recognition_result['chunks']:
print(f' {it}')
```
As a result, you can see a text output like this:
```text
Recognized chunks of Russian speech:
{'timestamp': (0.0, 4.8), 'text': 'Ну, виспер, сам по себе, что такое виспер. Виспер — это уже полноценное'}
{'timestamp': (4.8, 8.4), 'text': ' end-to-end нейросетевое решение с авторегрессионным декодером.'}
{'timestamp': (8.4, 10.88), 'text': ' То есть, это не чистый энкодер, как Wave2Vec.'}
{'timestamp': (10.88, 15.6), 'text': ' Это не просто текстовый сек-то-сек, энкодер-декодер, как T5.'}
{'timestamp': (15.6, 19.12), 'text': ' Это полноценный алгоритм преобразования речи в текст,'}
{'timestamp': (19.12, 23.54), 'text': ' где энкодер учитывает, прежде всего, акустические фичи речи,'}
{'timestamp': (23.54, 25.54), 'text': ' ну и семантика тоже постепенно подмешивается,'}
{'timestamp': (25.54, 29.94), 'text': ' а декодер — это уже языковая модель, которая генерирует токен за токеном.'}
Recognized chunks of English speech:
{'timestamp': (0.0, 8.08), 'text': 'Ensembling can help us to solve a well-known bias-variance trade-off.'}
{'timestamp': (8.96, 20.08), 'text': 'We can decrease variance on basis of large ensemble, large ensemble of different algorithms.'}
```
#### Voice activity detection (speech/non-speech)
Along with special language tokens, the model can also return the special token `<|nospeech|>`, if the input audio signal does not contain any speech (for details, see section 2.3 of the [corresponding paper about Whisper](https://arxiv.org/pdf/2212.04356)). This skill of the model forms the basis of the speech/non-speech classification algorithm, as demonstrated in the following example:
```python
nonspeech_sound_url = 'https://huggingface.co/bond005/whisper-podlodka-turbo/resolve/main/test_sound_nonspeech.wav'
nonspeech_sound_name = wget.download(nonspeech_sound_url)
nonspeech_sound = librosa.load(nonspeech_sound_name, sr=target_sampling_rate, mono=True)[0]
print('Duration of sound without speech = {0:.3f} seconds.'.format(
nonspeech_sound.shape[0] / target_sampling_rate
))
detected_languages = detect_language_in_speech(
nonspeech_sound,
asr.feature_extractor,
asr.tokenizer,
asr.model
)
print('Top-3 languages:')
lang_text_width = max([len(it[0]) for it in detected_languages])
for it in detected_languages[0:3]:
print(' {0:>{1}} {2:.4f}'.format(it[0], lang_text_width, it[1]))
```
As a result, you can see a text output like this:
```text
Duration of sound without speech = 10.000 seconds.
Top-3 languages:
NO SPEECH 0.9957
lingala 0.0002
cantonese 0.0002
```
#### Speech translation
In addition to the transcription task, the model also performs speech translation (although it translates better from Russian into English than from English into Russian):
```python
print(f'Speech translation from Russian to English:')
recognition_result = asr(
sound_ru,
generate_kwargs={'task': 'translate', 'language': 'english'},
return_timestamps=False
)
print(recognition_result['text'] + '\n')
print(f'Speech translation from English to Russian:')
recognition_result = asr(
sound_en,
generate_kwargs={'task': 'translate', 'language': 'russian'},
return_timestamps=False
)
print(recognition_result['text'] + '\n')
```
As a result, you can see a text output like this:
```text
Speech translation from Russian to English:
Well, Visper, what is Visper? Visper is already a complete end-to-end neural network with an autoregressive decoder. That is, it's not a pure encoder like Wave2Vec, it's not just a text-to-seq encoder-decoder like T5, it's a complete algorithm for the transformation of speech into text, where the encoder considers, first of all, acoustic features of speech, well, and the semantics are also gradually moving, and the decoder is already a language model that generates token by token.
Speech translation from English to Russian:
Энсемблинг может помочь нам осуществлять хорошо известный торговый байз-вариант. Мы можем ограничить варианты на основе крупного энсембла, крупного энсембла разных алгоритмов.
```
As you can see, both translations contain some errors; however, the errors in the English-to-Russian example are more significant.
## Bias, Risks, and Limitations
- While improvements are observed for English and translation tasks, statistically significant advantages are confirmed only for Russian ASR
- The model's performance on [code-switching speech](https://en.wikipedia.org/wiki/Code-switching) (where speakers alternate between Russian and English within the same utterance) has not been specifically evaluated
- Inherits basic limitations of the Whisper architecture
## Training Details
### Training Data
The model was fine-tuned on a composite dataset including:
- [Common Voice](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0) (Ru, En)
- [Podlodka Speech](https://huggingface.co/datasets/bond005/podlodka_speech) (Ru)
- [Taiga Speech](https://huggingface.co/datasets/bond005/taiga_speech_v2) (Ru, synthetic)
- [Golos Farfield](https://huggingface.co/datasets/bond005/sberdevices_golos_100h_farfield) and [Golos Crowd](https://huggingface.co/datasets/bond005/sberdevices_golos_10h_crowd) (Ru)
- [Sova Rudevices](https://huggingface.co/datasets/bond005/sova_rudevices) (Ru)
- [Audioset](https://huggingface.co/datasets/bond005/audioset-nonspeech) (non-speech audio)
### Training Features
**1. Data Augmentation:**
- Dynamic mixing of speech with background noise and music (see the sketch below)
- Gradual reduction of signal-to-noise ratio during training
**2. Text Data Processing:**
- Russian text punctuation and capitalization restoration using [bond005/ruT5-ASR-large](https://huggingface.co/bond005/ruT5-ASR-large) (for speech sub-corpora without punctuated annotations)
- Parallel Russian-English text generation using [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- Multi-stage validation of generated texts to minimize hallucinations using [bond005/xlm-roberta-xl-hallucination-detector](https://huggingface.co/bond005/xlm-roberta-xl-hallucination-detector)
**3. Training Strategy:**
- Progressive increase in training example complexity
- Balanced sampling between speech and non-speech data
- Special handling of language tokens and no-speech detection (`<|nospeech|>`)
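To illustrate the noise-mixing augmentation from point 1 above, here is a minimal sketch of mixing a noise recording into a speech signal at a chosen signal-to-noise ratio; it illustrates the general technique and is not the exact augmentation code used for training:
```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix `noise` into `speech` so that the result has the requested SNR in dB."""
    # Loop or trim the noise to match the speech length.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[:len(speech)]

    speech_power = np.mean(speech ** 2) + 1e-12
    noise_power = np.mean(noise ** 2) + 1e-12

    # Choose a scale so that speech_power / (scale**2 * noise_power) == 10 ** (snr_db / 10).
    scale = np.sqrt(speech_power / (noise_power * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise
```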
## Evaluation
The experimental evaluation focused on two main tasks:
1. Russian speech recognition
2. Speech activity detection (binary classification "speech/non-speech")
Testing was performed on publicly available Russian speech corpora. Speech recognition was conducted using [the standard pipeline](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline) from the Hugging Face 🤗 [Transformers library](https://huggingface.co/docs/transformers/index). Due to the limitations of this pipeline in language identification and non-speech detection (caused by a certain bug), the [whisper-lid](https://github.com/bond005/whisper-lid) library was used for speech presence/absence detection in the signal.
### Testing Data & Metrics
#### Testing Data
The quality of the Russian speech recognition task was tested on test sub-sets of six different datasets:
- [Common Voice 11 Ru](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0)
- [Podlodka Speech](https://huggingface.co/datasets/bond005/podlodka_speech)
- [Golos Farfield](https://huggingface.co/datasets/bond005/sberdevices_golos_100h_farfield)
- [Golos Crowd](https://huggingface.co/datasets/bond005/sberdevices_golos_10h_crowd)
- [Sova Rudevices](https://huggingface.co/datasets/bond005/sova_rudevices)
- [Russian Librispeech](https://huggingface.co/datasets/bond005/rulibrispeech)
The quality of the voice activity detection task was tested on test sub-sets of two different datasets:
- [noised version of Golos Crowd](https://huggingface.co/datasets/bond005/sberdevices_golos_10h_crowd_noised_2db) as a source of speech samples
- [filtered sub-set of Audioset corpus](https://huggingface.co/datasets/bond005/audioset-nonspeech) as a source of non-speech samples
Noise was added using [a special augmenter](https://github.com/dangrebenkin/audio_augmentator) capable of simulating the superposition of five different types of acoustic noise (reverberation, speech-like sounds, music, household sounds, and pet sounds) at a given signal-to-noise ratio (in this case, a signal-to-noise ratio of 2 dB was used).
The quality of the *robust* Russian speech recognition task was tested on the test sub-set of the above-mentioned [noised Golos Crowd](https://huggingface.co/datasets/bond005/sberdevices_golos_10h_crowd_noised_2db).
#### Metrics
**1. Modified [WER (Word Error Rate)](https://en.wikipedia.org/wiki/Word_error_rate)** for Russian speech recognition quality:
- Text normalization before WER calculation:
- Unification of numeral representations (digits/words)
- Standardization of foreign words (Cyrillic/Latin scripts)
- Accounting for valid transliteration variants
- Enables more accurate assessment of semantic recognition accuracy
- The lower the WER, the better the speech recognition quality
**2. [F1-score](https://en.wikipedia.org/wiki/F-score)** for speech activity detection:
- Binary classification "speech/non-speech"
- Evaluation of non-speech segment detection accuracy using `<|nospeech|>` token
- The higher the F1 score, the better the voice activity detection quality
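For reference, here is a minimal sketch of the underlying metric computations (plain WER via `jiwer` and F1 via scikit-learn); the full text normalization described above (numeral unification, transliteration variants) is deliberately not reproduced:
```python
import jiwer                      # pip install jiwer
from sklearn.metrics import f1_score

# Plain WER between reference and hypothesis transcripts (no custom normalization).
references = ["пример распознавания речи"]
hypotheses = ["пример распознавание речи"]
print(f"WER = {jiwer.wer(references, hypotheses):.2%}")

# F1 for speech/non-speech detection: 1 = speech, 0 = non-speech (i.e. <|nospeech|> predicted).
y_true = [1, 1, 0, 0, 1]
y_pred = [1, 0, 0, 0, 1]
print(f"F1 = {f1_score(y_true, y_pred):.4f}")
```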
### Results
#### Automatic Speech Recognition (ASR)
*Result (WER, %)*:
| Dataset | bond005/whisper-podlodka-turbo | openai/whisper-large-v3-turbo |
|----------------------------|--------------------------------|-------------------------------|
| bond005/podlodka_speech | 7.81 | 8.33 |
| rulibrispeech | 9.61 | 10.25 |
| sberdevices_golos_farfield | 11.26 | 20.12 |
| sberdevices_golos_crowd | 11.82 | 14.55 |
| sova_rudevices | 15.26 | 17.70 |
| common_voice_11_0 | 5.22 | 6.63 |
#### Voice Activity Detection (VAD)
*Result (F1)*:
| bond005/whisper-podlodka-turbo | openai/whisper-large-v3-turbo |
|--------------------------------|-------------------------------|
| 0.9214 | 0.8484 |
#### Robust ASR (SNR = 2 dB, speech-like noise, music, etc.)
*Result (WER, %)*:
| Dataset | bond005/whisper-podlodka-turbo | openai/whisper-large-v3-turbo |
|----------------------------------|--------------------------------|-------------------------------|
| sberdevices_golos_crowd (noised) | 46.14 | 75.20 |
## Citation
If you use this model in your work, please cite it as:
```bibtex
@misc{whisper-podlodka-turbo,
author = {Ivan Bondarenko},
title = {Whisper-Podlodka-Turbo: Enhanced Whisper Model for Russian ASR},
year = {2025},
publisher = {Hugging Face},
journal = {Hugging Face Model Hub},
howpublished = {\url{https://huggingface.co/bond005/whisper-podlodka-turbo}}
}
```
be2be2/my_awesome_billsum_model | be2be2 | last_modified: 2025-08-11T06:24:08Z | downloads: 0 | likes: 0 | library_name: transformers | tags: [transformers, safetensors, t5, text2text-generation, generated_from_trainer, base_model:google-t5/t5-small, base_model:finetune:google-t5/t5-small, license:apache-2.0, text-generation-inference, endpoints_compatible, region:us] | pipeline_tag: null | createdAt: 2025-08-11T06:23:58Z
---
library_name: transformers
license: apache-2.0
base_model: google-t5/t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4894
- Rouge1: 15.1559
- Rouge2: 5.226
- Rougel: 12.2378
- Rougelsum: 12.2239
- Gen Len: 2000.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 4.8246 | 0.0323 | 2 | 4.6334 | 14.4855 | 5.0231 | 12.1368 | 12.1254 | 2000.0 |
| 4.906 | 0.0645 | 4 | 4.5100 | 14.4329 | 4.9592 | 12.0945 | 12.1106 | 2000.0 |
| 4.8877 | 0.0968 | 6 | 4.3949 | 14.4597 | 4.881 | 12.1046 | 12.118 | 2000.0 |
| 4.7623 | 0.1290 | 8 | 4.1999 | 14.3667 | 4.8653 | 12.0412 | 12.0458 | 2000.0 |
| 4.5735 | 0.1613 | 10 | 4.0610 | 14.4551 | 4.8268 | 12.0111 | 12.029 | 2000.0 |
| 4.1697 | 0.1935 | 12 | 3.9348 | 14.459 | 4.8835 | 12.0186 | 12.0317 | 2000.0 |
| 3.9466 | 0.2258 | 14 | 3.7285 | 14.492 | 4.7994 | 12.0043 | 12.0001 | 2000.0 |
| 4.19 | 0.2581 | 16 | 3.6092 | 14.2935 | 4.6462 | 11.8568 | 11.8758 | 2000.0 |
| 3.7991 | 0.2903 | 18 | 3.5140 | 14.106 | 4.4777 | 11.7222 | 11.7161 | 2000.0 |
| 3.6421 | 0.3226 | 20 | 3.4145 | 14.0312 | 4.4049 | 11.6747 | 11.6684 | 2000.0 |
| 3.6484 | 0.3548 | 22 | 3.3426 | 14.1155 | 4.4771 | 11.7146 | 11.7067 | 2000.0 |
| 3.7566 | 0.3871 | 24 | 3.2824 | 14.0439 | 4.4101 | 11.6456 | 11.6398 | 2000.0 |
| 3.828 | 0.4194 | 26 | 3.2191 | 13.9501 | 4.3142 | 11.5649 | 11.558 | 2000.0 |
| 3.505 | 0.4516 | 28 | 3.1688 | 13.9209 | 4.2759 | 11.5698 | 11.5551 | 2000.0 |
| 3.467 | 0.4839 | 30 | 3.1304 | 13.816 | 4.1888 | 11.4883 | 11.4751 | 2000.0 |
| 3.2724 | 0.5161 | 32 | 3.0968 | 13.8275 | 4.1757 | 11.4918 | 11.4765 | 2000.0 |
| 3.1572 | 0.5484 | 34 | 3.0638 | 13.7565 | 4.1544 | 11.4169 | 11.4017 | 2000.0 |
| 3.3082 | 0.5806 | 36 | 3.0362 | 13.7676 | 4.1888 | 11.4009 | 11.3806 | 2000.0 |
| 3.2159 | 0.6129 | 38 | 3.0100 | 13.564 | 4.0787 | 11.2744 | 11.2455 | 2000.0 |
| 3.3438 | 0.6452 | 40 | 2.9825 | 13.4738 | 4.0005 | 11.1618 | 11.135 | 2000.0 |
| 3.2587 | 0.6774 | 42 | 2.9580 | 13.4158 | 4.0646 | 11.1112 | 11.0963 | 2000.0 |
| 3.0484 | 0.7097 | 44 | 2.9355 | 13.3027 | 4.0331 | 11.1227 | 11.1127 | 2000.0 |
| 3.1701 | 0.7419 | 46 | 2.9146 | 13.3935 | 4.035 | 11.1089 | 11.0943 | 2000.0 |
| 3.1144 | 0.7742 | 48 | 2.8945 | 13.2357 | 3.8709 | 10.9883 | 10.9698 | 2000.0 |
| 3.2611 | 0.8065 | 50 | 2.8756 | 13.3365 | 3.9707 | 11.0484 | 11.0462 | 2000.0 |
| 3.0423 | 0.8387 | 52 | 2.8575 | 13.3542 | 3.9953 | 11.0888 | 11.0779 | 2000.0 |
| 3.1193 | 0.8710 | 54 | 2.8405 | 13.3056 | 3.9113 | 11.118 | 11.099 | 2000.0 |
| 2.9974 | 0.9032 | 56 | 2.8248 | 13.3675 | 3.9264 | 11.1316 | 11.1148 | 2000.0 |
| 3.0579 | 0.9355 | 58 | 2.8102 | 13.371 | 3.945 | 11.1374 | 11.1279 | 2000.0 |
| 3.2434 | 0.9677 | 60 | 2.7964 | 13.1714 | 3.871 | 11.0051 | 10.9954 | 2000.0 |
| 2.9767 | 1.0 | 62 | 2.7832 | 13.0728 | 3.8094 | 10.9181 | 10.9149 | 2000.0 |
| 2.9854 | 1.0323 | 64 | 2.7704 | 12.9766 | 3.7579 | 10.8101 | 10.8082 | 2000.0 |
| 2.8919 | 1.0645 | 66 | 2.7586 | 13.0417 | 3.7537 | 10.824 | 10.8201 | 2000.0 |
| 2.9225 | 1.0968 | 68 | 2.7472 | 13.1607 | 3.8843 | 10.9298 | 10.9228 | 2000.0 |
| 3.173 | 1.1290 | 70 | 2.7363 | 13.0887 | 3.9032 | 10.8716 | 10.8608 | 2000.0 |
| 3.0448 | 1.1613 | 72 | 2.7258 | 13.1113 | 3.8846 | 10.8509 | 10.8413 | 2000.0 |
| 3.0989 | 1.1935 | 74 | 2.7156 | 13.2044 | 3.9782 | 10.9448 | 10.9398 | 2000.0 |
| 3.0072 | 1.2258 | 76 | 2.7057 | 13.272 | 4.0363 | 11.0001 | 10.9965 | 2000.0 |
| 2.7462 | 1.2581 | 78 | 2.6968 | 13.284 | 4.0337 | 10.9831 | 10.9815 | 2000.0 |
| 3.0383 | 1.2903 | 80 | 2.6879 | 13.3569 | 4.0058 | 10.9546 | 10.9505 | 2000.0 |
| 3.1326 | 1.3226 | 82 | 2.6793 | 13.4761 | 4.1349 | 11.0978 | 11.0816 | 2000.0 |
| 2.9859 | 1.3548 | 84 | 2.6710 | 13.3568 | 4.1278 | 11.0238 | 11.0165 | 2000.0 |
| 2.8721 | 1.3871 | 86 | 2.6630 | 13.321 | 4.1405 | 10.9662 | 10.9656 | 2000.0 |
| 2.996 | 1.4194 | 88 | 2.6555 | 13.4558 | 4.187 | 11.0321 | 11.0208 | 2000.0 |
| 2.9725 | 1.4516 | 90 | 2.6484 | 13.4779 | 4.1527 | 11.0813 | 11.0645 | 2000.0 |
| 3.0609 | 1.4839 | 92 | 2.6416 | 13.4159 | 4.1525 | 11.0169 | 11.0159 | 2000.0 |
| 2.7738 | 1.5161 | 94 | 2.6351 | 13.5566 | 4.2041 | 11.1207 | 11.1094 | 2000.0 |
| 2.9562 | 1.5484 | 96 | 2.6290 | 13.6845 | 4.313 | 11.2173 | 11.201 | 2000.0 |
| 2.6523 | 1.5806 | 98 | 2.6231 | 13.7239 | 4.3225 | 11.2591 | 11.2495 | 2000.0 |
| 3.0343 | 1.6129 | 100 | 2.6174 | 13.7076 | 4.2742 | 11.2433 | 11.2304 | 2000.0 |
| 2.7485 | 1.6452 | 102 | 2.6121 | 13.7974 | 4.3356 | 11.2775 | 11.2672 | 2000.0 |
| 2.9437 | 1.6774 | 104 | 2.6069 | 13.7932 | 4.3368 | 11.3156 | 11.2995 | 2000.0 |
| 2.8865 | 1.7097 | 106 | 2.6018 | 13.7692 | 4.3153 | 11.2896 | 11.27 | 2000.0 |
| 2.9826 | 1.7419 | 108 | 2.5967 | 13.8606 | 4.3539 | 11.3807 | 11.3579 | 2000.0 |
| 2.8272 | 1.7742 | 110 | 2.5918 | 13.8233 | 4.3525 | 11.3732 | 11.3524 | 2000.0 |
| 2.7165 | 1.8065 | 112 | 2.5874 | 13.7949 | 4.3456 | 11.3495 | 11.3293 | 2000.0 |
| 2.9133 | 1.8387 | 114 | 2.5833 | 13.7697 | 4.2713 | 11.2912 | 11.2696 | 2000.0 |
| 2.8366 | 1.8710 | 116 | 2.5795 | 13.8202 | 4.3674 | 11.366 | 11.3487 | 2000.0 |
| 2.8033 | 1.9032 | 118 | 2.5760 | 13.8181 | 4.4343 | 11.3883 | 11.3739 | 2000.0 |
| 2.8846 | 1.9355 | 120 | 2.5723 | 13.7795 | 4.368 | 11.3212 | 11.3145 | 2000.0 |
| 3.0411 | 1.9677 | 122 | 2.5688 | 13.7885 | 4.3801 | 11.3358 | 11.325 | 2000.0 |
| 2.931 | 2.0 | 124 | 2.5654 | 13.8741 | 4.3871 | 11.3962 | 11.3926 | 2000.0 |
| 2.7692 | 2.0323 | 126 | 2.5619 | 13.9234 | 4.3635 | 11.4122 | 11.4131 | 2000.0 |
| 2.576 | 2.0645 | 128 | 2.5588 | 14.0455 | 4.3772 | 11.4421 | 11.4408 | 2000.0 |
| 2.9965 | 2.0968 | 130 | 2.5559 | 14.1379 | 4.4182 | 11.5059 | 11.4938 | 2000.0 |
| 2.7233 | 2.1290 | 132 | 2.5532 | 14.1848 | 4.3899 | 11.5132 | 11.5076 | 2000.0 |
| 2.7718 | 2.1613 | 134 | 2.5507 | 14.2975 | 4.4565 | 11.5842 | 11.5739 | 2000.0 |
| 2.7089 | 2.1935 | 136 | 2.5482 | 14.3484 | 4.5523 | 11.6193 | 11.6119 | 2000.0 |
| 2.9317 | 2.2258 | 138 | 2.5457 | 14.3306 | 4.5679 | 11.5783 | 11.581 | 2000.0 |
| 2.8748 | 2.2581 | 140 | 2.5432 | 14.354 | 4.6003 | 11.6214 | 11.6195 | 2000.0 |
| 2.9315 | 2.2903 | 142 | 2.5407 | 14.4648 | 4.6567 | 11.7028 | 11.6888 | 2000.0 |
| 2.7498 | 2.3226 | 144 | 2.5383 | 14.5232 | 4.7442 | 11.7706 | 11.7634 | 2000.0 |
| 2.9018 | 2.3548 | 146 | 2.5358 | 14.5162 | 4.7371 | 11.7518 | 11.7461 | 2000.0 |
| 2.8626 | 2.3871 | 148 | 2.5332 | 14.5341 | 4.7496 | 11.7407 | 11.7339 | 2000.0 |
| 2.8584 | 2.4194 | 150 | 2.5309 | 14.5072 | 4.7626 | 11.7506 | 11.7441 | 2000.0 |
| 2.8144 | 2.4516 | 152 | 2.5288 | 14.5934 | 4.8165 | 11.7748 | 11.7664 | 2000.0 |
| 2.9953 | 2.4839 | 154 | 2.5268 | 14.6244 | 4.8584 | 11.8037 | 11.7946 | 2000.0 |
| 2.8001 | 2.5161 | 156 | 2.5249 | 14.6272 | 4.8834 | 11.798 | 11.7867 | 2000.0 |
| 2.9155 | 2.5484 | 158 | 2.5232 | 14.5808 | 4.8743 | 11.784 | 11.7721 | 2000.0 |
| 2.8051 | 2.5806 | 160 | 2.5215 | 14.6371 | 4.9178 | 11.8453 | 11.8353 | 2000.0 |
| 2.5662 | 2.6129 | 162 | 2.5199 | 14.6974 | 4.9668 | 11.8859 | 11.8727 | 2000.0 |
| 2.6469 | 2.6452 | 164 | 2.5184 | 14.6868 | 4.9259 | 11.8825 | 11.865 | 2000.0 |
| 2.8197 | 2.6774 | 166 | 2.5169 | 14.7867 | 4.9884 | 11.9872 | 11.9718 | 2000.0 |
| 2.5777 | 2.7097 | 168 | 2.5155 | 14.8429 | 5.0189 | 12.0169 | 12.0112 | 2000.0 |
| 2.8761 | 2.7419 | 170 | 2.5141 | 14.7896 | 4.9689 | 11.9929 | 11.9731 | 2000.0 |
| 2.5811 | 2.7742 | 172 | 2.5128 | 14.8042 | 4.9854 | 12.0156 | 11.9908 | 2000.0 |
| 2.7054 | 2.8065 | 174 | 2.5116 | 14.7848 | 4.9706 | 11.9896 | 11.9707 | 2000.0 |
| 3.0032 | 2.8387 | 176 | 2.5105 | 14.7583 | 4.9384 | 11.9507 | 11.9375 | 2000.0 |
| 2.7478 | 2.8710 | 178 | 2.5093 | 14.7583 | 4.9384 | 11.9507 | 11.9375 | 2000.0 |
| 2.9108 | 2.9032 | 180 | 2.5083 | 14.7757 | 4.9641 | 11.9403 | 11.9253 | 2000.0 |
| 2.6513 | 2.9355 | 182 | 2.5072 | 14.7844 | 4.9922 | 11.974 | 11.9511 | 2000.0 |
| 2.8323 | 2.9677 | 184 | 2.5061 | 14.7482 | 4.9533 | 11.9389 | 11.9192 | 2000.0 |
| 2.8963 | 3.0 | 186 | 2.5051 | 14.8324 | 5.0133 | 11.9974 | 11.9702 | 2000.0 |
| 2.815 | 3.0323 | 188 | 2.5041 | 14.8624 | 5.0289 | 12.0094 | 11.982 | 2000.0 |
| 2.9109 | 3.0645 | 190 | 2.5030 | 14.8735 | 5.0289 | 12.0258 | 11.995 | 2000.0 |
| 2.6712 | 3.0968 | 192 | 2.5021 | 14.9826 | 5.0544 | 12.088 | 12.0656 | 2000.0 |
| 2.6606 | 3.1290 | 194 | 2.5011 | 14.9826 | 5.0544 | 12.088 | 12.0656 | 2000.0 |
| 2.7432 | 3.1613 | 196 | 2.5002 | 14.9826 | 5.0544 | 12.088 | 12.0656 | 2000.0 |
| 2.9712 | 3.1935 | 198 | 2.4992 | 14.9826 | 5.0544 | 12.088 | 12.0656 | 2000.0 |
| 2.6893 | 3.2258 | 200 | 2.4985 | 14.9696 | 5.0281 | 12.0609 | 12.0404 | 2000.0 |
| 2.8161 | 3.2581 | 202 | 2.4977 | 14.9196 | 4.9833 | 12.0323 | 12.0162 | 2000.0 |
| 3.1472 | 3.2903 | 204 | 2.4969 | 14.9196 | 4.9833 | 12.0323 | 12.0162 | 2000.0 |
| 2.5583 | 3.3226 | 206 | 2.4963 | 14.9173 | 4.9915 | 12.0334 | 12.0144 | 2000.0 |
| 2.7874 | 3.3548 | 208 | 2.4956 | 14.9874 | 5.02 | 12.1013 | 12.0778 | 2000.0 |
| 2.6359 | 3.3871 | 210 | 2.4950 | 15.0208 | 5.0521 | 12.116 | 12.0974 | 2000.0 |
| 2.8058 | 3.4194 | 212 | 2.4945 | 14.9932 | 5.0521 | 12.0931 | 12.074 | 2000.0 |
| 2.6235 | 3.4516 | 214 | 2.4939 | 15.0197 | 5.0646 | 12.1154 | 12.0984 | 2000.0 |
| 2.6428 | 3.4839 | 216 | 2.4934 | 15.0643 | 5.1251 | 12.1614 | 12.146 | 2000.0 |
| 2.6676 | 3.5161 | 218 | 2.4929 | 15.0791 | 5.1583 | 12.1771 | 12.1619 | 2000.0 |
| 2.5883 | 3.5484 | 220 | 2.4925 | 15.099 | 5.1968 | 12.194 | 12.1806 | 2000.0 |
| 2.9245 | 3.5806 | 222 | 2.4921 | 15.0971 | 5.1976 | 12.2001 | 12.1868 | 2000.0 |
| 2.9351 | 3.6129 | 224 | 2.4917 | 15.0971 | 5.1976 | 12.2001 | 12.1868 | 2000.0 |
| 2.9175 | 3.6452 | 226 | 2.4913 | 15.0966 | 5.1916 | 12.1845 | 12.1757 | 2000.0 |
| 2.6997 | 3.6774 | 228 | 2.4910 | 15.0851 | 5.1622 | 12.1822 | 12.1717 | 2000.0 |
| 2.7747 | 3.7097 | 230 | 2.4907 | 15.0803 | 5.1485 | 12.1655 | 12.1563 | 2000.0 |
| 2.5892 | 3.7419 | 232 | 2.4904 | 15.0803 | 5.1485 | 12.1655 | 12.1563 | 2000.0 |
| 2.7554 | 3.7742 | 234 | 2.4902 | 15.0604 | 5.1485 | 12.1559 | 12.1488 | 2000.0 |
| 2.8548 | 3.8065 | 236 | 2.4900 | 15.1559 | 5.226 | 12.2378 | 12.2239 | 2000.0 |
| 2.7879 | 3.8387 | 238 | 2.4898 | 15.1559 | 5.226 | 12.2378 | 12.2239 | 2000.0 |
| 2.7142 | 3.8710 | 240 | 2.4896 | 15.1417 | 5.2106 | 12.2284 | 12.2173 | 2000.0 |
| 2.7282 | 3.9032 | 242 | 2.4895 | 15.1268 | 5.2071 | 12.2203 | 12.2101 | 2000.0 |
| 2.6589 | 3.9355 | 244 | 2.4894 | 15.1147 | 5.1913 | 12.2199 | 12.2097 | 2000.0 |
| 2.7158 | 3.9677 | 246 | 2.4894 | 15.1396 | 5.226 | 12.233 | 12.2144 | 2000.0 |
| 2.7397 | 4.0 | 248 | 2.4894 | 15.1559 | 5.226 | 12.2378 | 12.2239 | 2000.0 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
rmdhirr/llama-ins-236
|
rmdhirr
| 2025-08-11T06:22:42Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/llama-3.2-11b-vision-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-08-11T06:18:51Z |
---
base_model: unsloth/llama-3.2-11b-vision-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/llama-3.2-11b-vision-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
basimazam/safe-diffusion-guidance
|
basimazam
| 2025-08-11T06:19:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T04:29:13Z |
# Safe Diffusion Guidance (SDG)
Custom Diffusers pipeline that applies a mid-UNet safety classifier as guidance during denoising.
- Plug-and-play: works with any Stable Diffusion checkpoint (e.g., SD 1.5).
- No retraining needed; classifier runs on mid-UNet features.
- Tunable: `safety_scale`, `mid_fraction`, `safe_class_index`.
## Install
```bash
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
```
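The card does not document the exact entry point, so the following is a hypothetical usage sketch. It assumes the repository ships a Diffusers community pipeline (a `pipeline.py` loadable via `custom_pipeline=`) and that the tunables listed above are call-time keyword arguments; adjust to the actual API in the repo.
```python
import torch
from diffusers import DiffusionPipeline

# Hypothetical usage sketch: the custom_pipeline target and the call-time kwargs are assumptions.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",                      # any SD 1.5-style checkpoint
    custom_pipeline="basimazam/safe-diffusion-guidance",   # assumes a pipeline.py in this repo
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a crowded city street at night",
    safety_scale=7.5,     # strength of the mid-UNet safety guidance (assumed kwarg)
    mid_fraction=0.5,     # portion of denoising steps that apply guidance (assumed kwarg)
    safe_class_index=0,   # classifier index treated as "safe" (assumed kwarg)
).images[0]
image.save("sdg_sample.png")
```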
|
ramshafirdous/malaysian-address-corrector-lora
|
ramshafirdous
| 2025-08-11T06:14:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"peft",
"lora",
"qlora",
"address-normalization",
"address-correction",
"malaysia",
"text-generation",
"en",
"ms",
"base_model:openlm-research/open_llama_3b_v2",
"base_model:adapter:openlm-research/open_llama_3b_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T03:52:32Z |
---
license: apache-2.0
base_model: openlm-research/open_llama_3b_v2
library_name: transformers
pipeline_tag: text-generation
model_type: peft
adapter_type: lora
language: [en, ms]
tags: [peft, lora, qlora, address-normalization, address-correction, malaysia]
---
# Malaysian Address Corrector LoRA
This is a **LoRA adapter** for [`openlm-research/open_llama_3b_v2`](https://huggingface.co/openlm-research/open_llama_3b_v2) fine-tuned to **normalize and standardize Malaysian postal addresses**.
It expands common abbreviations, enforces consistent comma-separated formatting, and outputs **uppercase** standardized addresses.
⚠️ **Important:** This repo contains **adapters only** — you must load them on top of the base model. The Hosted Inference widget will not run adapters directly.
---
# Model Card for Model ID
This model is a LoRA-fine-tuned adapter built on top of OpenLLaMA 3B v2, specialized for Malaysian address correction. It:
- Expands common local abbreviations (e.g., JLN → JALAN, TMN → TAMAN, WPKL → KUALA LUMPUR)
- Normalizes spacing and adds commas, outputting addresses in a consistent, one-line, uppercase format
- Formats addresses as `[Address/Unit], [Street], [Locality/Area], [City], [Postcode], [State]`
- Runs efficiently on modest GPUs thanks to 4-bit quantization + LoRA, and supports easy batch or interactive usage
It is ideal for developers needing clean, standardized Malaysian postal addresses for shipping labels, geocoding, or databases.
## Model Details
- Base model: [`openlm-research/open_llama_3b_v2`](https://huggingface.co/openlm-research/open_llama_3b_v2) (Apache-2.0)
- Technique: QLoRA-style PEFT (LoRA on a 4-bit base)
- Intended users: developers standardizing Malaysian postal addresses
## Uses
- Correct and standardize Malaysian addresses in free-form text
- Expand common abbreviations (e.g., JLN, TMN, LRG, WPKL)
- Produce a single uppercase line suitable for label printing or geocoding prep
## Out-of-Scope Use
- Non-Malaysian address formats
- Entity verification/validation against authoritative sources
- Geocoding / latitude-longitude lookup
## Bias, Risks & Limitations
- **Formatting assumptions:** The model favors Malaysian conventions and may incorrectly reorder non-MY addresses.
- **Ambiguity:** Abbreviations like HSN may map to multiple names; defaults are rule-based and may not match all cases.
- **Hallucination:** The model can invent a locality/state if the input is severely incomplete; keep a human in the loop for critical mailings.
## Recommendations
- Keep a deterministic rule layer (abbreviation expansion + uppercasing + simple postcode/state checks).
- If you have authoritative reference lists (states, cities, postcodes), validate the final line before use; see the sketch below.
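As a minimal illustration of that validation step (a hypothetical helper, not shipped with this adapter), assuming the caller supplies the authoritative state list:
```python
import re

def validate_final_line(address: str, known_states: set) -> bool:
    """Cheap sanity checks on the model's one-line output (illustrative only)."""
    parts = [p.strip() for p in address.split(",")]
    if len(parts) < 3:
        return False
    postcode, state = parts[-2], parts[-1]
    return bool(re.fullmatch(r"\d{5}", postcode)) and state.upper() in known_states

# Example with a caller-supplied reference list
print(validate_final_line(
    "8, LORONG ZAINAL ABIDIN 13, KAMPUNG PENDAMAR, KLANG, 41200, SELANGOR",
    {"SELANGOR", "KUALA LUMPUR", "JOHOR"},
))
```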
## Training Details
- Base model: openlm-research/open_llama_3b_v2
- Method: LoRA fine-tuning with QLoRA (4-bit quantization)
- Dataset: synthetic + manually curated Malaysian address pairs (JSONL format: instruction, input, output)
- Task: causal LM, few-shot prompting with output delimiters `<OUT>...</OUT>`
- Epochs: 2
- Batch size: 2 (gradient accumulation 8)
- Learning rate: 2e-4 (cosine schedule, warmup 5%)
## How to use (LoRA adapter)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel
import torch, re

BASE = "openlm-research/open_llama_3b_v2"
ADAPTER = "ramshafirdous/malaysian-address-corrector-lora"

bnb = BitsAndBytesConfig(
    load_in_4bit=True, bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.float16,
)

tok = AutoTokenizer.from_pretrained(BASE, use_fast=False)
if tok.pad_token_id is None:
    tok.pad_token = tok.eos_token

base = AutoModelForCausalLM.from_pretrained(BASE, quantization_config=bnb, device_map="auto", trust_remote_code=True)
model = PeftModel.from_pretrained(base, ADAPTER).eval()

def tidy_commas_upper(s):
    # Rule-based post-processor: normalize separators, collapse whitespace, uppercase.
    s = re.sub(r"[\t|]+", ", ", s)
    s = re.sub(r"\s*,\s*", ", ", s)
    s = re.sub(r"\s{2,}", " ", s).strip()
    return s.upper()

OUT_S, OUT_E = "<OUT>", "</OUT>"
FEWSHOT = (
    "MALAYSIAN ADDRESS NORMALIZER.\n"
    "EXPAND ABBREVIATIONS. ONE LINE. ALL CAPS.\n"
    "FORMAT: [ADDRESS], [STREET], [LOCALITY], [CITY], [POSTCODE], [STATE]\n\n"
    f"Input: 8 LRG ZAINAL ABIDIN 13 KAMPUNG PENDAMAR KLANG 41200 Selangor\n"
    f"Output: {OUT_S}8, LORONG ZAINAL ABIDIN 13, KAMPUNG PENDAMAR, KLANG, 41200, SELANGOR{OUT_E}\n"
)

def correct_address(raw, max_new_tokens=128):
    prompt = f"{FEWSHOT}\nInput: {raw}\nOutput: {OUT_S}"
    enc = tok(prompt, return_tensors="pt", truncation=True, max_length=1024).to(model.device)
    with torch.no_grad():
        out = model.generate(**enc, max_new_tokens=max_new_tokens, do_sample=False,
                             repetition_penalty=1.05, eos_token_id=tok.eos_token_id,
                             pad_token_id=tok.pad_token_id)
    txt = tok.decode(out[0], skip_special_tokens=True)
    # Take the completion after the last <OUT> (the model's answer, not the few-shot example).
    seg = txt.rsplit(OUT_S, 1)[-1]
    seg = seg.split(OUT_E, 1)[0] if OUT_E in seg else seg.split("\n", 1)[0]
    return tidy_commas_upper(seg)

print(correct_address("11A, JALAN BU 11/14, BANDAR UTAMA PETALING JAYA 47800 Selangor"))
```
## Evaluation
Qualitative validation on held-out messy inputs:
| Input | Output |
| ------------------------------------------------------------------------ | ------------------------------------------------------------------------- |
| `11A, JALAN BU 11/14, BANDAR UTAMA PETALING JAYA 47800 Selangor` | `11A, JALAN BU 11/14, BANDAR UTAMA, PETALING JAYA, 47800, SELANGOR` |
| `LEVEL 30 THE GARDENS NORTH TOWER MID VALLEY CITY 59200 WP Kuala Lumpur` | `LEVEL 30, THE GARDENS NORTH TOWER, MID VALLEY CITY, 59200, KUALA LUMPUR` |
| `8 LRG ZAINAL ABIDIN 13 KAMPUNG PENDAMAR KLANG 41200 Selangor` | `8, LORONG ZAINAL ABIDIN 13, KAMPUNG PENDAMAR, KLANG, 41200, SELANGOR` |
## Abbreviation coverage
| Abbreviation | Expansion |
| ----------------------- | --------------------- |
| JLN | JALAN |
| TMN | TAMAN |
| LRG | LORONG |
| BDR | BANDAR |
| PJS | PETALING JAYA SELATAN |
| WPKL | KUALA LUMPUR |
| KPG | KAMPUNG |
| PLG | PULAU |
| BLK | BLOK |
| LEBUH RAYA / HWY / HWAY | LEBUH RAYA |
| ... | ... |
## Known Limitations
- The model relies on prompt patterns — inconsistent prompting may reduce accuracy.
- Does not validate postcode vs. state matches.
- May occasionally insert or omit commas if input spacing is irregular (use a rule-based post-processor like `tidy_commas_upper`).
- Trained for Malaysian addresses only.
- Not for parsing addresses into structured fields.
- Not a geocoder — it does not verify location existence.
## Model Card Authors
Author: Ramsha Firdous
|
lusxvr/nanoVLM
|
lusxvr
| 2025-08-11T06:10:51Z | 71 | 3 |
nanovlm
|
[
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-05-23T15:49:20Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model at https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("lusxvr/nanoVLM")
```
|
PrabalAryal/vits_0.0.2
|
PrabalAryal
| 2025-08-11T06:10:16Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2025-08-09T08:07:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
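In the absence of an official snippet, the following is a minimal sketch that assumes this checkpoint follows the standard `transformers` VITS (MMS-TTS-style) text-to-audio interface; the example input text is illustrative, since the target language is not documented.
```python
import torch
import scipy.io.wavfile
from transformers import AutoTokenizer, VitsModel

# Assumes the standard transformers VITS text-to-audio interface.
model = VitsModel.from_pretrained("PrabalAryal/vits_0.0.2")
tokenizer = AutoTokenizer.from_pretrained("PrabalAryal/vits_0.0.2")

inputs = tokenizer("Hello, this is a test.", return_tensors="pt")  # illustrative input
with torch.no_grad():
    waveform = model(**inputs).waveform  # shape: (batch, samples)

scipy.io.wavfile.write("output.wav", rate=model.config.sampling_rate, data=waveform.squeeze().numpy())
```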
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lazysniper/Horiza-RAG-base-8b
|
Lazysniper
| 2025-08-11T06:06:05Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"Horiza",
"conversational",
"en",
"base_model:unsloth/gemma-3n-E2B-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3n-E2B-it-unsloth-bnb-4bit",
"license:gemma",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-07T08:31:40Z |
---
base_model:
- unsloth/gemma-3n-E2B-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- gemma3n
- Horiza
license: gemma
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Lazysniper
- **License:** Gemma terms of use
- **Finetuned from model:** unsloth/gemma-3n-e2b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
yuchiahung/invoice_extraction_donut_fromv0_f21_ep3_0724_edit_distance_edit_dist_loss_100
|
yuchiahung
| 2025-08-11T06:04:48Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-08-11T03:53:37Z |
---
library_name: transformers
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
model-index:
- name: invoice_extraction_donut_fromv0_f21_ep3_0724_edit_distance_edit_dist_loss_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# invoice_extraction_donut_fromv0_f21_ep3_0724_edit_distance_edit_dist_loss_100
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 14.5138
- Char Accuracy: 1.0
- Exact Match Accuracy: 1.0
- Avg Pred Length: 7.0
- Avg Label Length: 7.0
- Length Ratio: 1.0
- Avg Edit Distance: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 100
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Char Accuracy | Exact Match Accuracy | Avg Pred Length | Avg Label Length | Length Ratio | Avg Edit Distance |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|:--------------------:|:---------------:|:----------------:|:------------:|:-----------------:|
| 15.2872 | 1.0 | 1496 | 11.4125 | 0.9957 | 0.9940 | 6.9698 | 7.0 | 0.9957 | 0.0302 |
| 11.8103 | 2.0 | 2992 | 13.7640 | 1.0 | 1.0 | 7.0 | 7.0 | 1.0 | 0.0 |
| 9.4386 | 3.0 | 4488 | 14.5138 | 1.0 | 1.0 | 7.0 | 7.0 | 1.0 | 0.0 |
### Framework versions
- Transformers 4.53.2
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
roeker/blockassist-bc-quick_wiry_owl_1754892132
|
roeker
| 2025-08-11T06:03:11Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T06:03:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754891061
|
Sayemahsjn
| 2025-08-11T06:02:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T06:02:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
alexgeezy429/blockassist-bc-scented_coiled_antelope_1754890010
|
alexgeezy429
| 2025-08-11T06:01:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scented coiled antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T06:01:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scented coiled antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
esi777/blockassist-bc-camouflaged_trotting_eel_1754891999
|
esi777
| 2025-08-11T06:01:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T06:00:49Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
w401523/w
|
w401523
| 2025-08-11T06:00:43Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-08-11T06:00:43Z |
---
license: bigscience-openrail-m
---
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754891863
|
IvanJAjebu
| 2025-08-11T05:59:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:58:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hafizd/qwen3-4b-toxic-v0
|
hafizd
| 2025-08-11T05:52:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated",
"base_model:finetune:huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T05:51:46Z |
---
base_model: huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hafizd
- **License:** apache-2.0
- **Finetuned from model:** huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
MickHester/the-silver-pigs-finetuned
|
MickHester
| 2025-08-11T05:51:11Z | 0 | 0 | null |
[
"generated_from_trainer",
"audiobook",
"fine-tuned",
"text-generation",
"lora",
"axolotl",
"en",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-11T03:57:51Z |
---
license: apache-2.0
base_model: teknium/OpenHermes-2.5-Mistral-7B
tags:
- generated_from_trainer
- audiobook
- fine-tuned
- text-generation
- lora
- axolotl
language:
- en
pipeline_tag: text-generation
widget:
- text: "Who is the main character in the story?"
- text: "Describe the setting of the book."
- text: "What are the main themes explored?"
---
# the-silver-pigs-finetuned
This model was fine-tuned on audiobook content using the AudioBook Visualizer application.
## Model Details
- **Base Model**: teknium/OpenHermes-2.5-Mistral-7B
- **Fine-tuning Method**: QLoRA (4-bit quantization with LoRA adapters)
- **Training Framework**: Axolotl
- **Training Infrastructure**: RunPod Serverless
- **Upload Date**: 2025-08-11T05:51:10.991Z
## Training Configuration
- **LoRA Rank**: 32
- **LoRA Alpha**: 16
- **LoRA Dropout**: 0.05
- **Target Modules**: q_proj, v_proj, k_proj, o_proj
- **Learning Rate**: 2e-4
- **Batch Size**: 2
- **Gradient Accumulation**: 4
- **Training Duration**: ~3 minutes
## Usage with vLLM
Deploy on RunPod serverless with these environment variables:
```json
{
"MODEL_NAME": "the-silver-pigs-finetuned",
"TRUST_REMOTE_CODE": "true",
"MAX_MODEL_LEN": "2048",
"DTYPE": "float16",
"ENABLE_LORA": "true"
}
```
## Usage with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_model = "teknium/OpenHermes-2.5-Mistral-7B"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto"
)
model = PeftModel.from_pretrained(model, "the-silver-pigs-finetuned")
tokenizer = AutoTokenizer.from_pretrained("the-silver-pigs-finetuned")

# Query the model
prompt = "Tell me about the main character."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Training Data
This model was fine-tuned on audiobook transcript data, processed into ~490 question-answer pairs covering:
- Character descriptions and relationships
- Plot events and summaries
- Dialogue and memorable quotes
- Settings and world-building details
## Limitations
- Model knowledge is limited to the specific audiobook content
- May generate plausible but incorrect details
- Performance on general tasks may differ from base model
## Created With
[AudioBook Visualizer](https://github.com/yourusername/audiobook-visualizer) - An application for fine-tuning LLMs on audiobook content.
---
*Original model path: /runpod-volume/fine-tuning/training-1754891374667*
|
zhw-e8/LAMAR
|
zhw-e8
| 2025-08-11T05:49:32Z | 0 | 0 | null |
[
"safetensors",
"biology",
"doi:10.57967/hf/6198",
"license:mit",
"region:us"
] | null | 2024-10-15T02:55:31Z |
---
license: mit
tags:
- biology
---
# LAMAR
LAMAR is a Foundation **La**nguage **M**odel for RN**A** **R**egulation. It achieves performance better than or comparable to baseline models on various RNA regulation tasks, helping to decipher the rules of RNA regulation. LAMAR was developed by the Rnasys Lab and the Bio-Med Big Data Center, Shanghai Institute of Nutrition and Health (SINH), Chinese Academy of Sciences (CAS).
This repository contains pretrained and fine-tuned weights for the RNA foundation language model **LAMAR**.

## Scripts
The scripts for pretraining and fine-tuning LAMAR are deposited in Github (https://github.com/zhw-e8/LAMAR).
## Model weights
LAMAR is pretrained on approximately 15 million sequences from the genomes and transcriptomes of 225 mammals and 1,569 viruses, and further fine-tuned with labeled datasets for various tasks. Considering the sequence lengths of genes/transcripts and the available computational resources, we pretrained two models with contextual lengths of up to 2048 and 4096 tokens, named LAMAR-2k and LAMAR-4k.
* mammalian80D_2048len1mer1sw_80M: Pretrained weights of LAMAR-2k
* mammalian80D_4096len1mer1sw_80M: Pretrained weights of LAMAR-4k
LAMAR is fine-tuned to predict splice sites, mRNA translation efficiency, mRNA degradation rate, and internal ribosome entry sites (IRES).
* SpliceSitePred: Weights of fine-tuned LAMAR for predicting splice sites of pre-mRNA
* UTR5TEPred: Weights of fine-tuned LAMAR for predicting mRNA translation efficiency from the 5' UTR
* UTR3DegPred: Weights of fine-tuned LAMAR for predicting mRNA degradation rate from the 3' UTR
* IRESPred: Weights of fine-tuned LAMAR for predicting internal ribosome entry sites (IRES)
## Citation
https://www.biorxiv.org/content/10.1101/2024.10.12.617732v2
|
ChaosMon/corgy_Cat_LoRA
|
ChaosMon
| 2025-08-11T05:49:16Z | 33 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-08-06T08:03:27Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of TOK Cat
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - ChaosMon/corgy_Cat_LoRA
<Gallery />
## Model description
These are ChaosMon/corgy_Cat_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK Cat` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/ChaosMon/corgy_Cat_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
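Until the authors provide their own snippet, here is a minimal sketch using the standard `diffusers` LoRA-loading path (not an official example from the training run); the prompt is illustrative and uses the trigger phrase above, and the fp16-fix VAE is the one mentioned in the description.
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Load the fp16-fix VAE used during training, then the SDXL base pipeline.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Attach the DreamBooth LoRA weights from this repo.
pipe.load_lora_weights("ChaosMon/corgy_Cat_LoRA")

image = pipe("a photo of TOK Cat sitting on a sofa", num_inference_steps=30).images[0]
image.save("tok_cat.png")
```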
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754891216
|
IvanJAjebu
| 2025-08-11T05:48:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:47:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754890942
|
kapalbalap
| 2025-08-11T05:43:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:43:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1754889452
|
koloni
| 2025-08-11T05:43:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:42:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DevQuasar/Locutusque.StockQwen-2.5-7B-GGUF
|
DevQuasar
| 2025-08-11T05:43:03Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:Locutusque/StockQwen-2.5-7B",
"base_model:quantized:Locutusque/StockQwen-2.5-7B",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-11T04:37:04Z |
---
base_model:
- Locutusque/StockQwen-2.5-7B
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [Locutusque/StockQwen-2.5-7B](https://huggingface.co/Locutusque/StockQwen-2.5-7B)
'Make knowledge free for everyone'
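A minimal usage sketch with `llama-cpp-python`; the quant filename pattern below is an assumption, so pick an actual file from the repo's Files tab.
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DevQuasar/Locutusque.StockQwen-2.5-7B-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; check the Files tab for the available GGUFs
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a one-sentence summary of GGUF quantization."}]
)
print(out["choices"][0]["message"]["content"])
```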
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
roeker/blockassist-bc-quick_wiry_owl_1754890653
|
roeker
| 2025-08-11T05:39:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:38:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754890645
|
ggozzy
| 2025-08-11T05:38:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:38:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jinyu220/gaze_model_av_aloha_real_put_coinv3_new
|
Jinyu220
| 2025-08-11T05:37:39Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-08-11T05:37:27Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
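Because the defining code is not linked above, the checkpoint can only be restored with the original model class; the sketch below shows the mixin's standard pattern with a hypothetical class name and constructor (both are assumptions, not this model's real architecture).
```python
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical placeholder class: the real architecture for this checkpoint is not documented here.
class GazeModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_dim: int = 256):
        super().__init__()
        self.head = nn.Linear(hidden_dim, 2)  # e.g., a 2-D gaze target

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x)

# With the *actual* class definition, the mixin restores config and weights like this:
model = GazeModel.from_pretrained("Jinyu220/gaze_model_av_aloha_real_put_coinv3_new")
model.eval()
```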
|
esi777/blockassist-bc-camouflaged_trotting_eel_1754890476
|
esi777
| 2025-08-11T05:35:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"camouflaged trotting eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:35:20Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged trotting eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jungsanghyun/vits-small.ko2
|
jungsanghyun
| 2025-08-11T05:35:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T05:34:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Reo10/test01-policy
|
Reo10
| 2025-08-11T05:34:23Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"act",
"robotics",
"dataset:Reo10/test01",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-11T05:16:31Z |
---
datasets: Reo10/test01
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Ferdi3425/blockassist-bc-amphibious_deadly_otter_1754890260
|
Ferdi3425
| 2025-08-11T05:33:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deadly otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:32:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deadly otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1754888546
|
milliarderdol
| 2025-08-11T05:29:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:29:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Ww1313w/my_multi_smolvla_4
|
Ww1313w
| 2025-08-11T05:29:20Z | 10 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-08-07T03:23:13Z |
# ReadMe
## My Target
I'd like to train a single model that works for two tasks: transfer_cube and insertion.
## Known Problem
The model sometimes does the insertion well, but cannot transfer the cube at all.
## Training
To reproduce this model, run:
```bash
python -m lerobot.scripts.train \
  --policy.path=lerobot/smolvla_base \
  --dataset.repo_id=Ww1313w/TransferCube_Insertion \
  --steps=20000 \
  --output_dir=outputs/train/my_multi_smolvla \
  --policy.push_to_hub=false \
  --wandb.enable=true
```
|
ershop/blockassist-bc-mangy_pudgy_manatee_1754890046
|
ershop
| 2025-08-11T05:28:38Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mangy pudgy manatee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:28:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mangy pudgy manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rayon12/blockassist-bc-prowling_prowling_rabbit_1754889688
|
rayon12
| 2025-08-11T05:23:33Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"prowling prowling rabbit",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:23:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- prowling prowling rabbit
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tuantranmlv/contractbert_dichvu_clause_key_a
|
tuantranmlv
| 2025-08-11T05:22:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T05:22:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
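As a minimal sketch (the expected input format and label names are not documented, so treat the returned labels as opaque until verified):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="tuantranmlv/contractbert_dichvu_clause_key_a",
)

# Illustrative contract-clause input; replace with real clause text.
print(clf("Bên B có trách nhiệm thanh toán trong vòng 30 ngày kể từ ngày nhận hóa đơn."))
```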
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754889542
|
IvanJAjebu
| 2025-08-11T05:20:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:20:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
chevda/finetuned-gpt2-lb-genai
|
chevda
| 2025-08-11T05:19:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T05:19:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
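The card leaves this section empty. Assuming from the repository name that this is a causal language model fine-tuned from GPT-2 (the card itself does not confirm the architecture), a minimal sketch could be:
```python
from transformers import pipeline

# Assumption: the checkpoint is a GPT-2 style causal LM, as the repository name suggests.
generator = pipeline("text-generation", model="chevda/finetuned-gpt2-lb-genai")

print(generator("Once upon a time", max_new_tokens=50)[0]["generated_text"])
```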
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nlee-208/limo_S-dsr1b_T-q32b_50
|
nlee-208
| 2025-08-11T05:19:23Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T04:17:23Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
model_name: limo_S-dsr1b_T-q32b_50
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for limo_S-dsr1b_T-q32b_50
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nlee-208/limo_S-dsr1b_T-q32b_50", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nlee28/cross1/runs/9ii87kbp)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.3
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Skywork/SkyReels-V2-I2V-14B-720P-Diffusers
|
Skywork
| 2025-08-11T05:16:14Z | 0 | 3 |
diffusers
|
[
"diffusers",
"safetensors",
"video",
"video generation",
"image-to-video",
"en",
"arxiv:2504.13074",
"arxiv:2407.01392",
"license:other",
"diffusers:SkyReelsV2ImageToVideoPipeline",
"region:us"
] |
image-to-video
| 2025-06-14T07:58:15Z |
---
license: other
license_name: skywork-license
license_link: LICENSE
pipeline_tag: image-to-video
library_name: diffusers
tags:
- video
- video generation
language:
- en
---
<p align="center">
<img src="assets/logo2.png" alt="SkyReels Logo" width="50%">
</p>
<h1 align="center">SkyReels V2: Infinite-Length Film Generative Model</h1>
<p align="center">
📑 <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a> · 👋 <a href="https://www.skyreels.ai/home?utm_campaign=huggingface_skyreels_v2" target="_blank">Playground</a> · 💬 <a href="https://discord.gg/PwM6NYtccQ" target="_blank">Discord</a> · 🤗 <a href="https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9" target="_blank">Hugging Face</a> · 🤖 <a href="https://www.modelscope.cn/collections/SkyReels-V2-f665650130b144" target="_blank">ModelScope</a> · 🌐 <a href="https://github.com/SkyworkAI/SkyReels-V2" target="_blank">GitHub</a>
</p>
---
Welcome to the **SkyReels V2** repository! Here, you'll find the model weights for our infinite-length film generative models. To the best of our knowledge, SkyReels-V2 is the first open-source video generative model to employ an **AutoRegressive Diffusion-Forcing architecture** and achieve **SOTA performance** among publicly available models.
## 🔥🔥🔥 News!!
* Apr 24, 2025: 🔥 We release the 720P models, [SkyReels-V2-DF-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P) and [SkyReels-V2-I2V-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P). The former facilitates infinite-length autoregressive video generation, and the latter focuses on Image2Video synthesis.
* Apr 21, 2025: 👋 We release the inference code and model weights of [SkyReels-V2](https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9) Series Models and the video captioning model [SkyCaptioner-V1](https://huggingface.co/Skywork/SkyCaptioner-V1) .
* Apr 3, 2025: 🔥 We also release [SkyReels-A2](https://github.com/SkyworkAI/SkyReels-A2). This is an open-sourced controllable video generation framework capable of assembling arbitrary visual elements.
* Feb 18, 2025: 🔥 We released [SkyReels-A1](https://github.com/SkyworkAI/SkyReels-A1). This is an open-sourced and effective framework for portrait image animation.
* Feb 18, 2025: 🔥 We released [SkyReels-V1](https://github.com/SkyworkAI/SkyReels-V1). This is the first and most advanced open-source human-centric video foundation model.
## 🎥 Demos
<table>
<tr>
<td align="center">
<video src="https://github.com/user-attachments/assets/f6f9f9a7-5d5f-433c-9d73-d8d593b7ad25" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/0eb13415-f4d9-4aaf-bcd3-3031851109b9" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/dcd16603-5bf4-4786-8e4d-1ed23889d07a" width="100%"></video>
</td>
</tr>
</table>
The demos above showcase 30-second videos generated using our SkyReels-V2 Diffusion Forcing model.
## 📑 TODO List
- [x] <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a>
- [x] Checkpoints of the 14B and 1.3B Models Series
- [x] Single-GPU & Multi-GPU Inference Code
- [x] <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a>: A Video Captioning Model
- [x] Prompt Enhancer
- [x] Diffusers integration
- [ ] Checkpoints of the 5B Models Series
- [ ] Checkpoints of the Camera Director Models
- [ ] Checkpoints of the Step & Guidance Distill Model
## 🚀 Quickstart
SkyReels-V2 can run directly using 🤗 Diffusers!
```py
# pip install ftfy
import numpy as np
import torch
import torchvision.transforms.functional as TF  # needed by center_crop_resize below
from diffusers import AutoModel, SkyReelsV2ImageToVideoPipeline, UniPCMultistepScheduler
from diffusers.utils import export_to_video, load_image
model_id = "Skywork/SkyReels-V2-I2V-14B-720P-Diffusers"
vae = AutoModel.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipeline = SkyReelsV2ImageToVideoPipeline.from_pretrained(
model_id,
vae=vae,
torch_dtype=torch.bfloat16
)
flow_shift = 5.0  # 8.0 for T2V, 5.0 for I2V (this is the I2V pipeline)
pipeline.scheduler = UniPCMultistepScheduler.from_config(pipeline.scheduler.config, flow_shift=flow_shift)
pipeline = pipeline.to("cuda")
first_frame = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/flf2v_input_first_frame.png")
def aspect_ratio_resize(image, pipeline, max_area=720 * 1280):
aspect_ratio = image.height / image.width
mod_value = pipeline.vae_scale_factor_spatial * pipeline.transformer.config.patch_size[1]
height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
image = image.resize((width, height))
return image, height, width
def center_crop_resize(image, height, width):
# Calculate resize ratio to match first frame dimensions
resize_ratio = max(width / image.width, height / image.height)
# Resize the image
width = round(image.width * resize_ratio)
height = round(image.height * resize_ratio)
size = [width, height]
image = TF.center_crop(image, size)
return image, height, width
first_frame, height, width = aspect_ratio_resize(first_frame, pipeline)
prompt = "CG animation style, a small blue bird takes off from the ground, flapping its wings. The bird's feathers are delicate, with a unique pattern on its chest. The background shows a blue sky with white clouds under bright sunshine. The camera follows the bird upward, capturing its flight and the vastness of the sky from a close-up, low-angle perspective."
output = pipeline(
image=first_frame,
    guidance_scale=5.0,
prompt=prompt,
num_inference_steps=50,
height=544, # 720 for 720P
width=960, # 1280 for 720P
num_frames=97,
).frames[0]
export_to_video(output, "video.mp4", fps=24, quality=8)
```
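If the 14B checkpoint does not fit in GPU memory, one option is to replace the `pipeline.to("cuda")` call above with model CPU offloading. This is a generic Diffusers feature rather than something specific to SkyReels, and it trades inference speed for lower VRAM usage:
```py
# Keep submodules on the CPU and move them to the GPU only when needed.
# Use this instead of `pipeline = pipeline.to("cuda")`.
pipeline.enable_model_cpu_offload()
```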
#### Installation
```shell
# clone the repository.
git clone https://github.com/SkyworkAI/SkyReels-V2
cd SkyReels-V2
# Install dependencies. Test environment uses Python 3.10.12.
pip install -r requirements.txt
```
#### Model Download
You can download our models from Hugging Face:
<table>
<thead>
<tr>
<th>Type</th>
<th>Model Variant</th>
<th>Recommended Height/Width/Frame</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="5">Diffusion Forcing</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Text-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Image-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="3">Camera Director</td>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
</tbody>
</table>
After downloading, set the model path in your generation commands:
#### Single GPU Inference
- **Diffusion Forcing for Long Video Generation**
The <a href="https://arxiv.org/abs/2407.01392">**Diffusion Forcing**</a> version model allows us to generate infinite-length videos. This model supports both **text-to-video (T2V)** and **image-to-video (I2V)** tasks, and it can perform inference in both synchronous and asynchronous modes. Here we provide two example scripts for long video generation. If you want to adjust the inference parameters, e.g., the video duration or the inference mode, read the Note below first.
synchronous generation for 10s video
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# synchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
asynchronous generation for 30s video
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# asynchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 5 \
--causal_block_size 5 \
--base_num_frames 97 \
--num_frames 737 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload
```
> **Note**:
> - If you want to run the **image-to-video (I2V)** task, add `--image ${image_path}` to your command; it is also better to use a **text-to-video (T2V)**-style prompt that includes a description of the first-frame image.
> - For long video generation, you can simply change `--num_frames`, e.g., `--num_frames 257` for a 10s video, `--num_frames 377` for 15s, `--num_frames 737` for 30s, `--num_frames 1457` for 60s. The number does not map exactly to the frame count for the specified duration, but it is aligned with some training parameters, which means it may perform better. When you use asynchronous inference with causal_block_size > 1, `--num_frames` should be set carefully.
> - You can use `--ar_step 5` to enable asynchronous inference. When using asynchronous inference, `--causal_block_size 5` is recommended, while it should not be set for synchronous generation. REMEMBER that the frame latent number fed into the model in every iteration, e.g., the base frame latent number (e.g., (97-1)//4+1=25 for base_num_frames=97) and (e.g., (237-97-(97-17)x1+17-1)//4+1=20 for base_num_frames=97, num_frames=237, overlap_history=17) for the last iteration, MUST be divisible by causal_block_size. If you find it too hard to calculate and set proper values, just use our recommended setting above :) (a small sanity-check helper is sketched after these notes). Asynchronous inference takes more steps to diffuse the whole sequence, which means it is SLOWER than synchronous mode. In our experiments, asynchronous inference may improve instruction following and visual consistency.
> - To reduce peak VRAM, just lower the `--base_num_frames`, e.g., to 77 or 57, while keeping the same generative length `--num_frames` you want to generate. This may slightly reduce video quality, and it should not be set too small.
> - `--addnoise_condition` is used to help smooth long video generation by adding some noise to the clean condition. Too much noise can also cause inconsistency. 20 is a recommended value; you may try larger ones, but it is best not to exceed 50.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 51.2GB peak VRAM.
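The divisibility rule above is easy to get wrong, so the helper below offers a rough sanity check. It is inferred from the formulas in the note (temporal downsampling of 4, windows advancing by `base_num_frames - overlap_history`) and is only an illustration; the generation script itself remains the source of truth:
```py
# Rough sanity check inferred from the note above, not part of the SkyReels scripts.
def latent_counts(num_frames, base_num_frames=97, overlap_history=17,
                  causal_block_size=5, temporal_downsample=4):
    counts = [(base_num_frames - 1) // temporal_downsample + 1]  # first window
    remaining = num_frames - base_num_frames
    step = base_num_frames - overlap_history  # new frames added per iteration
    while remaining > 0:
        chunk = min(remaining, step)
        counts.append((chunk + overlap_history - 1) // temporal_downsample + 1)
        remaining -= chunk
    # each latent count should be divisible by causal_block_size for async inference
    return [(c, c % causal_block_size == 0) for c in counts]

print(latent_counts(257))  # [(25, True), (25, True), (25, True)]
print(latent_counts(237))  # [(25, True), (25, True), (20, True)]
```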
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
python3 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image using the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for the I2V model.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 43.4GB peak VRAM.
- **Prompt Enhancer**
The prompt enhancer is implemented based on <a href="https://huggingface.co/Qwen/Qwen2.5-32B-Instruct">Qwen2.5-32B-Instruct</a> and is enabled via the `--prompt_enhancer` parameter. It works well for short prompts; for long prompts it may generate an excessively lengthy prompt that can lead to over-saturation in the generated video. Note that peak GPU memory is 64GB+ when `--prompt_enhancer` is used. If you want to obtain the enhanced prompt separately, you can also run the prompt_enhancer script on its own for testing. The steps are as follows:
```shell
cd skyreels_v2_infer/pipelines
python3 prompt_enhancer.py --prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface."
```
> **Note**:
> - `--prompt_enhancer` cannot be used together with `--use_usp`. We recommend running the skyreels_v2_infer/pipelines/prompt_enhancer.py script first to generate the enhanced prompt before enabling the `--use_usp` parameter.
**Advanced Configuration Options**
Below are the key parameters you can customize for video generation:
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --prompt | | Text description for generating your video |
| --image | | Path to input image for image-to-video generation |
| --resolution | 540P or 720P | Output video resolution (select based on model type) |
| --num_frames | 97 or 121 | Total frames to generate (**97 for 540P models**, **121 for 720P models**) |
| --inference_steps | 50 | Number of denoising steps |
| --fps | 24 | Frames per second in the output video |
| --shift | 8.0 or 5.0 | Flow matching scheduler parameter (**8.0 for T2V**, **5.0 for I2V**) |
| --guidance_scale | 6.0 or 5.0 | Controls text adherence strength (**6.0 for T2V**, **5.0 for I2V**) |
| --seed | | Fixed seed for reproducible results (omit for random generation) |
| --offload | True | Offloads model components to CPU to reduce VRAM usage (recommended) |
| --use_usp | True | Enables multi-GPU acceleration with xDiT USP |
| --outdir | ./video_out | Directory where generated videos will be saved |
| --prompt_enhancer | True | Expands the prompt into a more detailed description |
| --teacache | False | Enables teacache for faster inference |
| --teacache_thresh | 0.2 | Higher values give more speedup at the cost of quality |
| --use_ret_steps | False | Enables retention steps for teacache |
**Diffusion Forcing Additional Parameters**
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --ar_step | 0 | Controls asynchronous inference (0 for synchronous mode) |
| --base_num_frames | 97 or 121 | Base frame count (**97 for 540P**, **121 for 720P**) |
| --overlap_history | 17 | Number of frames to overlap for smooth transitions in long videos |
| --addnoise_condition | 20 | Improves consistency in long video generation |
| --causal_block_size | 5 | Recommended when using asynchronous inference (--ar_step > 0) |
#### Multi-GPU inference using xDiT USP
We use [xDiT](https://github.com/xdit-project/xDiT) USP to accelerate inference. For example, to generate a video with 2 GPUs, you can use the following command:
- **Diffusion Forcing**
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# diffusion forcing synchronous inference
torchrun --nproc_per_node=2 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--use_usp \
--offload \
--seed 42
```
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
torchrun --nproc_per_node=2 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--offload \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--use_usp \
--seed 42
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image using the `--image ${image_path}` parameter. The `--guidance_scale 5.0` and `--shift 3.0` is recommended for I2V model.
## Contents
- [Abstract](#abstract)
- [Methodology of SkyReels-V2](#methodology-of-skyreels-v2)
- [Key Contributions of SkyReels-V2](#key-contributions-of-skyreels-v2)
- [Video Captioner](#video-captioner)
- [Reinforcement Learning](#reinforcement-learning)
- [Diffusion Forcing](#diffusion-forcing)
- [High-Quality Supervised Fine-Tuning (SFT)](#high-quality-supervised-fine-tuning-sft)
- [Performance](#performance)
- [Acknowledgements](#acknowledgements)
- [Citation](#citation)
---
## Abstract
Recent advances in video generation have been driven by diffusion models and autoregressive frameworks, yet critical challenges persist in harmonizing prompt adherence, visual quality, motion dynamics, and duration: compromises in motion dynamics to enhance temporal visual quality, constrained video duration (5-10 seconds) to prioritize resolution, and inadequate shot-aware generation stemming from general-purpose MLLMs' inability to interpret cinematic grammar, such as shot composition, actor expressions, and camera motions. These intertwined limitations hinder realistic long-form synthesis and professional film-style generation.
To address these limitations, we introduce SkyReels-V2, the world's first infinite-length film generative model using a Diffusion Forcing framework. Our approach synergizes Multi-modal Large Language Models (MLLM), Multi-stage Pretraining, Reinforcement Learning, and Diffusion Forcing techniques to achieve comprehensive optimization. Beyond its technical innovations, SkyReels-V2 enables multiple practical applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and multi-subject consistent video generation through our <a href="https://github.com/SkyworkAI/SkyReels-A2">Skyreels-A2</a> system.
## Methodology of SkyReels-V2
The SkyReels-V2 methodology consists of several interconnected components. It starts with a comprehensive data processing pipeline that prepares various quality training data. At its core is the Video Captioner architecture, which provides detailed annotations for video content. The system employs a multi-task pretraining strategy to build fundamental video generation capabilities. Post-training optimization includes Reinforcement Learning to enhance motion quality, Diffusion Forcing Training for generating extended videos, and High-quality Supervised Fine-Tuning (SFT) stages for visual refinement. The model runs on optimized computational infrastructure for efficient training and inference. SkyReels-V2 supports multiple applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and Elements-to-Video Generation.
<p align="center">
<img src="assets/main_pipeline.jpg" alt="mainpipeline" width="100%">
</p>
## Key Contributions of SkyReels-V2
#### Video Captioner
<a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> serves as our video captioning model for data annotation. This model is trained on the captioning results from the base model <a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Instruct</a> and from sub-expert captioners, using balanced video data: a carefully curated dataset of approximately 2 million videos designed to ensure conceptual balance and annotation quality. Built upon the <a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Instruct</a> foundation model, <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> is fine-tuned to enhance performance in domain-specific video captioning tasks. To compare performance with SOTA models, we conducted a manual assessment of accuracy across different captioning fields using a test set of 1,000 samples. The proposed <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> achieves the highest average accuracy among the baseline models and shows a dramatic improvement in the shot-related fields.
<p align="center">
<table align="center">
<thead>
<tr>
<th>model</th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Ins.</a></th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Ins.</a></th>
<th><a href="https://huggingface.co/omni-research/Tarsier2-Recap-7b">Tarsier2-Recap-7b</a></th>
<th><a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a></th>
</tr>
</thead>
<tbody>
<tr>
<td>Avg accuracy</td>
<td>51.4%</td>
<td>58.7%</td>
<td>49.4%</td>
<td><strong>76.3%</strong></td>
</tr>
<tr>
<td>shot type</td>
<td>76.8%</td>
<td>82.5%</td>
<td>60.2%</td>
<td><strong>93.7%</strong></td>
</tr>
<tr>
<td>shot angle</td>
<td>60.0%</td>
<td>73.7%</td>
<td>52.4%</td>
<td><strong>89.8%</strong></td>
</tr>
<tr>
<td>shot position</td>
<td>28.4%</td>
<td>32.7%</td>
<td>23.6%</td>
<td><strong>83.1%</strong></td>
</tr>
<tr>
<td>camera motion</td>
<td>62.0%</td>
<td>61.2%</td>
<td>45.3%</td>
<td><strong>85.3%</strong></td>
</tr>
<tr>
<td>expression</td>
<td>43.6%</td>
<td>51.5%</td>
<td>54.3%</td>
<td><strong>68.8%</strong></td>
</tr>
<tr>
<td colspan="5" style="text-align: center; border-bottom: 1px solid #ddd; padding: 8px;"></td>
</tr>
<tr>
<td>TYPES_type</td>
<td>43.5%</td>
<td>49.7%</td>
<td>47.6%</td>
<td><strong>82.5%</strong></td>
</tr>
<tr>
<td>TYPES_sub_type</td>
<td>38.9%</td>
<td>44.9%</td>
<td>45.9%</td>
<td><strong>75.4%</strong></td>
</tr>
<tr>
<td>appearance</td>
<td>40.9%</td>
<td>52.0%</td>
<td>45.6%</td>
<td><strong>59.3%</strong></td>
</tr>
<tr>
<td>action</td>
<td>32.4%</td>
<td>52.0%</td>
<td><strong>69.8%</strong></td>
<td>68.8%</td>
</tr>
<tr>
<td>position</td>
<td>35.4%</td>
<td>48.6%</td>
<td>45.5%</td>
<td><strong>57.5%</strong></td>
</tr>
<tr>
<td>is_main_subject</td>
<td>58.5%</td>
<td>68.7%</td>
<td>69.7%</td>
<td><strong>80.9%</strong></td>
</tr>
<tr>
<td>environment</td>
<td>70.4%</td>
<td><strong>72.7%</strong></td>
<td>61.4%</td>
<td>70.5%</td>
</tr>
<tr>
<td>lighting</td>
<td>77.1%</td>
<td><strong>80.0%</strong></td>
<td>21.2%</td>
<td>76.5%</td>
</tr>
</tbody>
</table>
</p>
#### Reinforcement Learning
Inspired by previous successes with LLMs, we propose to enhance the performance of the generative model with Reinforcement Learning. Specifically, we focus on motion quality because we find that the main drawbacks of our generative model are:
- the generative model does not handle large, deformable motions well;
- the generated videos may violate physical laws.
To avoid the degradation in other metrics, such as text alignment and video quality, we ensure the preference data pairs have comparable text alignment and video quality, while only the motion quality varies. This requirement poses greater challenges in obtaining preference annotations due to the inherently higher costs of human annotation. To address this challenge, we propose a semi-automatic pipeline that strategically combines automatically generated motion pairs and human annotation results. This hybrid approach not only enhances the data scale but also improves alignment with human preferences through curated quality control. Leveraging this enhanced dataset, we first train a specialized reward model to capture the generic motion quality differences between paired samples. This learned reward function subsequently guides the sample selection process for Direct Preference Optimization (DPO), enhancing the motion quality of the generative model.
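For reference, the standard Direct Preference Optimization objective referred to above, written generically with the video generator as the policy, is:
$$
\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(x^{+}\mid c)}{\pi_{\mathrm{ref}}(x^{+}\mid c)} - \beta \log \frac{\pi_\theta(x^{-}\mid c)}{\pi_{\mathrm{ref}}(x^{-}\mid c)}\right)\right]
$$
where c is the conditioning prompt, x⁺ and x⁻ are the preferred and rejected samples selected with the learned motion-quality reward model, π_ref is the frozen reference model, and β controls how far the policy may drift from the reference.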
#### Diffusion Forcing
We introduce the Diffusion Forcing Transformer to unlock our model’s ability to generate long videos. Diffusion Forcing is a training and sampling strategy where each token is assigned an independent noise level. This allows tokens to be denoised according to arbitrary, per-token schedules. Conceptually, this approach functions as a form of partial masking: a token with zero noise is fully unmasked, while complete noise fully masks it. Diffusion Forcing trains the model to "unmask" any combination of variably noised tokens, using the cleaner tokens as conditional information to guide the recovery of noisy ones. Building on this, our Diffusion Forcing Transformer can extend video generation indefinitely based on the last frames of the previous segment. Note that the synchronous full sequence diffusion is a special case of Diffusion Forcing, where all tokens share the same noise level. This relationship allows us to fine-tune the Diffusion Forcing Transformer from a full-sequence diffusion model.
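To make the per-token noise idea concrete, the snippet below is a toy sketch under simplifying assumptions (a plain variance-preserving schedule and latents shaped `(batch, frames, channels, height, width)`); it illustrates the concept and is not the SkyReels-V2 training code:
```py
import torch

def add_per_token_noise(latents, num_train_timesteps=1000):
    # Each frame token gets its own timestep: t = 0 leaves it clean, larger t masks
    # it with more noise. Full-sequence diffusion is the case where all t are equal.
    b, f = latents.shape[:2]
    t = torch.randint(0, num_train_timesteps, (b, f), device=latents.device)
    alpha = (1.0 - t.float() / num_train_timesteps).view(b, f, 1, 1, 1)
    noise = torch.randn_like(latents)
    noisy = alpha.sqrt() * latents + (1.0 - alpha).sqrt() * noise
    # A Diffusion Forcing model is conditioned on t and trained to recover the noisy
    # tokens, using the cleaner tokens as context.
    return noisy, t
```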
#### High-Quality Supervised Fine-Tuning (SFT)
We implement two sequential high-quality supervised fine-tuning (SFT) stages at 540p and 720p resolutions respectively, with the initial SFT phase conducted immediately after pretraining but prior to the reinforcement learning (RL) stage. This first-stage SFT serves as a conceptual equilibrium trainer, building upon the foundation model's pretraining outcomes, which utilized only 24 fps video data, while strategically removing the FPS embedding components to streamline the architecture. Trained with high-quality concept-balanced samples, this phase establishes optimized initialization parameters for subsequent training processes. Following this, we execute a secondary high-resolution SFT at 720p after completing the diffusion forcing stage, incorporating identical loss formulations and higher-quality concept-balanced datasets obtained by manual filtering. This final refinement phase focuses on increasing resolution so that the overall video quality is further enhanced.
## Performance
To comprehensively evaluate our proposed method, we constructed SkyReels-Bench for human assessment and leveraged the open-source <a href="https://github.com/Vchitect/VBench">V-Bench</a> for automated evaluation. This allows us to compare our model with state-of-the-art (SOTA) baselines, including both open-source and proprietary models.
#### Human Evaluation
For human evaluation, we designed SkyReels-Bench with 1,020 text prompts, systematically assessing four dimensions: Instruction Adherence, Motion Quality, Consistency, and Visual Quality. This benchmark is designed to evaluate both text-to-video (T2V) and image-to-video (I2V) generation models, providing comprehensive assessment across different generation paradigms. To ensure fairness, all models were evaluated under default settings with consistent resolutions, and no post-generation filtering was applied.
- Text To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model Name</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://runwayml.com/research/introducing-gen-3-alpha">Runway-Gen3 Alpha</a></td>
<td>2.53</td>
<td>2.19</td>
<td>2.57</td>
<td>3.23</td>
<td>2.11</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.82</td>
<td>2.64</td>
<td>2.81</td>
<td>3.20</td>
<td>2.61</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 STD Mode</a></td>
<td>2.99</td>
<td>2.77</td>
<td>3.05</td>
<td>3.39</td>
<td><strong>2.76</strong></td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.0</td>
<td>2.8</td>
<td>3.08</td>
<td>3.29</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>3.12</td>
<td>2.91</td>
<td>3.31</td>
<td><strong>3.54</strong></td>
<td>2.71</td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>3.14</strong></td>
<td><strong>3.15</strong></td>
<td><strong>3.35</strong></td>
<td>3.34</td>
<td>2.74</td>
</tr>
</tbody>
</table>
</p>
The evaluation demonstrates that our model achieves significant advancements in **instruction adherence (3.15)** compared to baseline methods, while maintaining competitive performance in **motion quality (2.74)** without sacrificing **consistency (3.35)**.
- Image To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.84</td>
<td>2.97</td>
<td>2.95</td>
<td>2.87</td>
<td>2.56</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>2.85</td>
<td>3.10</td>
<td>2.81</td>
<td>3.00</td>
<td>2.48</td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.05</td>
<td>3.31</td>
<td>2.58</td>
<td>3.55</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 Pro Mode</a></td>
<td>3.4</td>
<td>3.56</td>
<td>3.03</td>
<td>3.58</td>
<td>3.41</td>
</tr>
<tr>
<td><a href="https://runwayml.com/research/introducing-runway-gen-4">Runway-Gen4</a></td>
<td>3.39</td>
<td>3.75</td>
<td>3.2</td>
<td>3.4</td>
<td>3.37</td>
</tr>
<tr>
<td>SkyReels-V2-DF</td>
<td>3.24</td>
<td>3.64</td>
<td>3.21</td>
<td>3.18</td>
<td>2.93</td>
</tr>
<tr>
<td>SkyReels-V2-I2V</td>
<td>3.29</td>
<td>3.42</td>
<td>3.18</td>
<td>3.56</td>
<td>3.01</td>
</tr>
</tbody>
</table>
</p>
Our results demonstrate that both **SkyReels-V2-I2V (3.29)** and **SkyReels-V2-DF (3.24)** achieve state-of-the-art performance among open-source models, significantly outperforming HunyuanVideo-13B (2.84) and Wan2.1-14B (2.85) across all quality dimensions. With an average score of 3.29, SkyReels-V2-I2V demonstrates comparable performance to proprietary models Kling-1.6 (3.4) and Runway-Gen4 (3.39).
#### VBench
To objectively compare the SkyReels-V2 model against other leading open-source text-to-video models, we conduct comprehensive evaluations using the public benchmark <a href="https://github.com/Vchitect/VBench">V-Bench</a>. Our evaluation specifically leverages the benchmark's longer-version prompts. For a fair comparison with baseline models, we strictly follow their recommended inference settings.
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Total Score</th>
<th>Quality Score</th>
<th>Semantic Score</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/hpcaitech/Open-Sora">OpenSora 2.0</a></td>
<td>81.5 %</td>
<td>82.1 %</td>
<td>78.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/THUDM/CogVideo">CogVideoX1.5-5B</a></td>
<td>80.3 %</td>
<td>80.9 %</td>
<td>77.9 %</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>82.7 %</td>
<td>84.4 %</td>
<td>76.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>83.7 %</td>
<td>84.2 %</td>
<td><strong>81.4 %</strong></td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>83.9 %</strong></td>
<td><strong>84.7 %</strong></td>
<td>80.8 %</td>
</tr>
</tbody>
</table>
</p>
The VBench results demonstrate that SkyReels-V2 outperforms all compared models, including HunyuanVideo-13B and Wan2.1-14B, with the highest **total score (83.9%)** and **quality score (84.7%)**. In this evaluation, the semantic score is slightly lower than Wan2.1-14B's, while we outperform Wan2.1-14B in human evaluations, with the primary gap attributed to V-Bench's insufficient evaluation of shot-scenario semantic adherence.
## Acknowledgements
We would like to thank the contributors of the <a href="https://github.com/Wan-Video/Wan2.1">Wan 2.1</a>, <a href="https://github.com/xdit-project/xDiT">xDiT</a> and <a href="https://qwenlm.github.io/blog/qwen2.5/">Qwen 2.5</a> repositories for their open research and contributions.
## Citation
```bibtex
@misc{chen2025skyreelsv2infinitelengthfilmgenerative,
title={SkyReels-V2: Infinite-length Film Generative Model},
author={Guibin Chen and Dixuan Lin and Jiangping Yang and Chunze Lin and Junchen Zhu and Mingyuan Fan and Hao Zhang and Sheng Chen and Zheng Chen and Chengcheng Ma and Weiming Xiong and Wei Wang and Nuo Pang and Kang Kang and Zhiheng Xu and Yuzhe Jin and Yupeng Liang and Yubing Song and Peng Zhao and Boyuan Xu and Di Qiu and Debang Li and Zhengcong Fei and Yang Li and Yahui Zhou},
year={2025},
eprint={2504.13074},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.13074},
}
```
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754889270
|
ggozzy
| 2025-08-11T05:16:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:15:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Skywork/SkyReels-V2-T2V-14B-540P-Diffusers
|
Skywork
| 2025-08-11T05:14:46Z | 0 | 1 |
diffusers
|
[
"diffusers",
"safetensors",
"video",
"video generation",
"text-to-video",
"en",
"arxiv:2504.13074",
"arxiv:2407.01392",
"license:other",
"diffusers:SkyReelsV2Pipeline",
"region:us"
] |
text-to-video
| 2025-06-14T07:50:01Z |
---
license: other
license_name: skywork-license
license_link: LICENSE
pipeline_tag: text-to-video
library_name: diffusers
tags:
- video
- video generation
language:
- en
---
<p align="center">
<img src="assets/logo2.png" alt="SkyReels Logo" width="50%">
</p>
<h1 align="center">SkyReels V2: Infinite-Length Film Generative Model</h1>
<p align="center">
📑 <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a> · 👋 <a href="https://www.skyreels.ai/home?utm_campaign=huggingface_skyreels_v2" target="_blank">Playground</a> · 💬 <a href="https://discord.gg/PwM6NYtccQ" target="_blank">Discord</a> · 🤗 <a href="https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9" target="_blank">Hugging Face</a> · 🤖 <a href="https://www.modelscope.cn/collections/SkyReels-V2-f665650130b144" target="_blank">ModelScope</a> · 🌐 <a href="https://github.com/SkyworkAI/SkyReels-V2" target="_blank">GitHub</a>
</p>
---
Welcome to the **SkyReels V2** repository! Here, you'll find the model weights for our infinite-length film generative models. To the best of our knowledge, SkyReels-V2 is the first open-source video generative model to employ an **AutoRegressive Diffusion-Forcing architecture** and achieve **SOTA performance** among publicly available models.
## 🔥🔥🔥 News!!
* Apr 24, 2025: 🔥 We release the 720P models, [SkyReels-V2-DF-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P) and [SkyReels-V2-I2V-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P). The former facilitates infinite-length autoregressive video generation, and the latter focuses on Image2Video synthesis.
* Apr 21, 2025: 👋 We release the inference code and model weights of [SkyReels-V2](https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9) Series Models and the video captioning model [SkyCaptioner-V1](https://huggingface.co/Skywork/SkyCaptioner-V1) .
* Apr 3, 2025: 🔥 We also release [SkyReels-A2](https://github.com/SkyworkAI/SkyReels-A2). This is an open-sourced controllable video generation framework capable of assembling arbitrary visual elements.
* Feb 18, 2025: 🔥 We released [SkyReels-A1](https://github.com/SkyworkAI/SkyReels-A1). This is an open-sourced and effective framework for portrait image animation.
* Feb 18, 2025: 🔥 We released [SkyReels-V1](https://github.com/SkyworkAI/SkyReels-V1). This is the first and most advanced open-source human-centric video foundation model.
## 🎥 Demos
<table>
<tr>
<td align="center">
<video src="https://github.com/user-attachments/assets/f6f9f9a7-5d5f-433c-9d73-d8d593b7ad25" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/0eb13415-f4d9-4aaf-bcd3-3031851109b9" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/dcd16603-5bf4-4786-8e4d-1ed23889d07a" width="100%"></video>
</td>
</tr>
</table>
The demos above showcase 30-second videos generated using our SkyReels-V2 Diffusion Forcing model.
## 📑 TODO List
- [x] <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a>
- [x] Checkpoints of the 14B and 1.3B Models Series
- [x] Single-GPU & Multi-GPU Inference Code
- [x] <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a>: A Video Captioning Model
- [x] Prompt Enhancer
- [x] Diffusers integration
- [ ] Checkpoints of the 5B Models Series
- [ ] Checkpoints of the Camera Director Models
- [ ] Checkpoints of the Step & Guidance Distill Model
## 🚀 Quickstart
SkyReels-V2 can run directly using 🤗 Diffusers!
```py
# pip install ftfy
import torch
from diffusers import AutoModel, SkyReelsV2Pipeline, UniPCMultistepScheduler
from diffusers.utils import export_to_video
model_id = "Skywork/SkyReels-V2-T2V-14B-540P-Diffusers"
vae = AutoModel.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipeline = SkyReelsV2Pipeline.from_pretrained(
model_id,
vae=vae,
torch_dtype=torch.bfloat16
)
flow_shift = 8.0 # 8.0 for T2V, 5.0 for I2V
pipeline.scheduler = UniPCMultistepScheduler.from_config(pipeline.scheduler.config, flow_shift=flow_shift)
pipeline = pipeline.to("cuda")
prompt = "A cat and a dog baking a cake together in a kitchen. The cat is carefully measuring flour, while the dog is stirring the batter with a wooden spoon. The kitchen is cozy, with sunlight streaming through the window."
output = pipeline(
prompt=prompt,
num_inference_steps=50,
height=544, # 720 for 720P
width=960, # 1280 for 720P
num_frames=97,
).frames[0]
export_to_video(output, "video.mp4", fps=24, quality=8)
```
#### Installation
```shell
# clone the repository.
git clone https://github.com/SkyworkAI/SkyReels-V2
cd SkyReels-V2
# Install dependencies. Test environment uses Python 3.10.12.
pip install -r requirements.txt
```
#### Model Download
You can download our models from Hugging Face:
<table>
<thead>
<tr>
<th>Type</th>
<th>Model Variant</th>
<th>Recommended Height/Width/Frame</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="5">Diffusion Forcing</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Text-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Image-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="3">Camera Director</td>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
</tbody>
</table>
After downloading, set the model path in your generation commands:
#### Single GPU Inference
- **Diffusion Forcing for Long Video Generation**
The <a href="https://arxiv.org/abs/2407.01392">**Diffusion Forcing**</a> version model allows us to generate infinite-length videos. This model supports both **text-to-video (T2V)** and **image-to-video (I2V)** tasks, and it can perform inference in both synchronous and asynchronous modes. Here we provide two example scripts for long video generation. If you want to adjust the inference parameters, e.g., the video duration or the inference mode, read the Note below first.
synchronous generation for 10s video
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# synchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
asynchronous generation for 30s video
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# asynchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 5 \
--causal_block_size 5 \
--base_num_frames 97 \
--num_frames 737 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload
```
> **Note**:
> - If you want to run the **image-to-video (I2V)** task, add `--image ${image_path}` to your command; it is also better to use a **text-to-video (T2V)**-style prompt that includes a description of the first-frame image.
> - For long video generation, you can simply change `--num_frames`, e.g., `--num_frames 257` for a 10s video, `--num_frames 377` for 15s, `--num_frames 737` for 30s, `--num_frames 1457` for 60s. The number does not map exactly to the frame count for the specified duration, but it is aligned with some training parameters, which means it may perform better. When you use asynchronous inference with causal_block_size > 1, `--num_frames` should be set carefully.
> - You can use `--ar_step 5` to enable asynchronous inference. When using asynchronous inference, `--causal_block_size 5` is recommended, while it should not be set for synchronous generation. REMEMBER that the frame latent number fed into the model in every iteration, e.g., the base frame latent number (e.g., (97-1)//4+1=25 for base_num_frames=97) and (e.g., (237-97-(97-17)x1+17-1)//4+1=20 for base_num_frames=97, num_frames=237, overlap_history=17) for the last iteration, MUST be divisible by causal_block_size. If you find it too hard to calculate and set proper values, just use our recommended setting above :). Asynchronous inference takes more steps to diffuse the whole sequence, which means it is SLOWER than synchronous mode. In our experiments, asynchronous inference may improve instruction following and visual consistency.
> - To reduce peak VRAM, just lower the `--base_num_frames`, e.g., to 77 or 57, while keeping the same generative length `--num_frames` you want to generate. This may slightly reduce video quality, and it should not be set too small.
> - `--addnoise_condition` is used to help smooth the long video generation by adding some noise to the clean condition. Too large noise can cause the inconsistency as well. 20 is a recommended value, and you may try larger ones, but it is recommended to not exceed 50.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 51.2GB peak VRAM.
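To make the divisibility constraint above concrete, here is a rough helper (not part of the SkyReels-V2 codebase; the frame-to-latent formula is taken directly from the note) that estimates the latent frame count of each Diffusion Forcing iteration and checks whether it is divisible by `causal_block_size`:
```python
# Rough helper, not part of the SkyReels-V2 repo: it mirrors the frame-to-latent
# formula quoted in the note above ((frames - 1) // 4 + 1) and checks the
# divisibility requirement for asynchronous Diffusion Forcing inference.

def latent_count(frames: int) -> int:
    return (frames - 1) // 4 + 1

def check_async_settings(num_frames: int, base_num_frames: int,
                         overlap_history: int, causal_block_size: int) -> None:
    counts = [latent_count(base_num_frames)]          # first (base) iteration
    remaining = num_frames - base_num_frames
    step = base_num_frames - overlap_history          # new frames added per iteration
    while remaining > 0:
        chunk = min(remaining, step)
        counts.append(latent_count(chunk + overlap_history))
        remaining -= chunk
    for i, c in enumerate(counts):
        ok = c % causal_block_size == 0
        print(f"iteration {i}: {c} latent frames, divisible by {causal_block_size}: {ok}")

# The example from the note: 25 latents for the base iteration, 20 for the last one,
# both divisible by causal_block_size=5.
check_async_settings(num_frames=237, base_num_frames=97,
                     overlap_history=17, causal_block_size=5)
```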
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
python3 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image using the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for the I2V model.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 43.4GB peak VRAM.
- **Prompt Enhancer**
The prompt enhancer is implemented based on <a href="https://huggingface.co/Qwen/Qwen2.5-32B-Instruct">Qwen2.5-32B-Instruct</a> and is enabled via the `--prompt_enhancer` parameter. It works best for short prompts; for long prompts it may generate an excessively lengthy prompt that can lead to over-saturation in the generated video. Note that peak GPU memory is 64GB+ when `--prompt_enhancer` is used. If you want to obtain the enhanced prompt separately, you can also run the prompt_enhancer script on its own for testing. The steps are as follows:
```shell
cd skyreels_v2_infer/pipelines
python3 prompt_enhancer.py --prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface."
```
> **Note**:
> - `--prompt_enhancer` cannot be used together with `--use_usp`. We recommend running the skyreels_v2_infer/pipelines/prompt_enhancer.py script first to generate an enhanced prompt before enabling the `--use_usp` parameter.
**Advanced Configuration Options**
Below are the key parameters you can customize for video generation:
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --prompt | | Text description for generating your video |
| --image | | Path to input image for image-to-video generation |
| --resolution | 540P or 720P | Output video resolution (select based on model type) |
| --num_frames | 97 or 121 | Total frames to generate (**97 for 540P models**, **121 for 720P models**) |
| --inference_steps | 50 | Number of denoising steps |
| --fps | 24 | Frames per second in the output video |
| --shift | 8.0 or 5.0 | Flow matching scheduler parameter (**8.0 for T2V**, **5.0 for I2V**) |
| --guidance_scale | 6.0 or 5.0 | Controls text adherence strength (**6.0 for T2V**, **5.0 for I2V**) |
| --seed | | Fixed seed for reproducible results (omit for random generation) |
| --offload | True | Offloads model components to CPU to reduce VRAM usage (recommended) |
| --use_usp | True | Enables multi-GPU acceleration with xDiT USP |
| --outdir | ./video_out | Directory where generated videos will be saved |
| --prompt_enhancer | True | Expand the prompt into a more detailed description |
| --teacache | False | Enables teacache for faster inference |
| --teacache_thresh | 0.2 | Teacache threshold; higher values give more speedup at the cost of quality |
| --use_ret_steps | False | Retention Steps for teacache |
**Diffusion Forcing Additional Parameters**
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --ar_step | 0 | Controls asynchronous inference (0 for synchronous mode) |
| --base_num_frames | 97 or 121 | Base frame count (**97 for 540P**, **121 for 720P**) |
| --overlap_history | 17 | Number of frames to overlap for smooth transitions in long videos |
| --addnoise_condition | 20 | Improves consistency in long video generation |
| --causal_block_size | 5 | Recommended when using asynchronous inference (--ar_step > 0) |
#### Multi-GPU inference using xDiT USP
We use [xDiT](https://github.com/xdit-project/xDiT) USP to accelerate inference. For example, to generate a video with 2 GPUs, you can use the following command:
- **Diffusion Forcing**
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# diffusion forcing synchronous inference
torchrun --nproc_per_node=2 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--use_usp \
--offload \
--seed 42
```
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
torchrun --nproc_per_node=2 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--offload \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--use_usp \
--seed 42
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image using the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for the I2V model.
## Contents
- [Abstract](#abstract)
- [Methodology of SkyReels-V2](#methodology-of-skyreels-v2)
- [Key Contributions of SkyReels-V2](#key-contributions-of-skyreels-v2)
- [Video Captioner](#video-captioner)
- [Reinforcement Learning](#reinforcement-learning)
- [Diffusion Forcing](#diffusion-forcing)
- [High-Quality Supervised Fine-Tuning(SFT)](#high-quality-supervised-fine-tuning-sft)
- [Performance](#performance)
- [Acknowledgements](#acknowledgements)
- [Citation](#citation)
---
## Abstract
Recent advances in video generation have been driven by diffusion models and autoregressive frameworks, yet critical challenges persist in harmonizing prompt adherence, visual quality, motion dynamics, and duration: compromises in motion dynamics to enhance temporal visual quality, constrained video duration (5-10 seconds) to prioritize resolution, and inadequate shot-aware generation stemming from general-purpose MLLMs' inability to interpret cinematic grammar, such as shot composition, actor expressions, and camera motions. These intertwined limitations hinder realistic long-form synthesis and professional film-style generation.
To address these limitations, we introduce SkyReels-V2, the world's first infinite-length film generative model using a Diffusion Forcing framework. Our approach synergizes Multi-modal Large Language Models (MLLM), Multi-stage Pretraining, Reinforcement Learning, and Diffusion Forcing techniques to achieve comprehensive optimization. Beyond its technical innovations, SkyReels-V2 enables multiple practical applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and multi-subject consistent video generation through our <a href="https://github.com/SkyworkAI/SkyReels-A2">Skyreels-A2</a> system.
## Methodology of SkyReels-V2
The SkyReels-V2 methodology consists of several interconnected components. It starts with a comprehensive data processing pipeline that prepares various quality training data. At its core is the Video Captioner architecture, which provides detailed annotations for video content. The system employs a multi-task pretraining strategy to build fundamental video generation capabilities. Post-training optimization includes Reinforcement Learning to enhance motion quality, Diffusion Forcing Training for generating extended videos, and High-quality Supervised Fine-Tuning (SFT) stages for visual refinement. The model runs on optimized computational infrastructure for efficient training and inference. SkyReels-V2 supports multiple applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and Elements-to-Video Generation.
<p align="center">
<img src="assets/main_pipeline.jpg" alt="mainpipeline" width="100%">
</p>
## Key Contributions of SkyReels-V2
#### Video Captioner
<a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> serves as our video captioning model for data annotation. This model is trained on the captioning results from the base model <a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Instruct</a> and the sub-expert captioners on balanced video data. The balanced video data is a carefully curated dataset of approximately 2 million videos that ensures conceptual balance and annotation quality. Built upon the <a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Instruct</a> foundation model, <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> is fine-tuned to enhance performance in domain-specific video captioning tasks. To compare its performance with the SOTA models, we conducted a manual assessment of accuracy across different captioning fields using a test set of 1,000 samples. The proposed <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> achieves the highest average accuracy among the baseline models and shows dramatic gains in the shot-related fields.
<p align="center">
<table align="center">
<thead>
<tr>
<th>model</th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Ins.</a></th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Ins.</a></th>
<th><a href="https://huggingface.co/omni-research/Tarsier2-Recap-7b">Tarsier2-Recap-7b</a></th>
<th><a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a></th>
</tr>
</thead>
<tbody>
<tr>
<td>Avg accuracy</td>
<td>51.4%</td>
<td>58.7%</td>
<td>49.4%</td>
<td><strong>76.3%</strong></td>
</tr>
<tr>
<td>shot type</td>
<td>76.8%</td>
<td>82.5%</td>
<td>60.2%</td>
<td><strong>93.7%</strong></td>
</tr>
<tr>
<td>shot angle</td>
<td>60.0%</td>
<td>73.7%</td>
<td>52.4%</td>
<td><strong>89.8%</strong></td>
</tr>
<tr>
<td>shot position</td>
<td>28.4%</td>
<td>32.7%</td>
<td>23.6%</td>
<td><strong>83.1%</strong></td>
</tr>
<tr>
<td>camera motion</td>
<td>62.0%</td>
<td>61.2%</td>
<td>45.3%</td>
<td><strong>85.3%</strong></td>
</tr>
<tr>
<td>expression</td>
<td>43.6%</td>
<td>51.5%</td>
<td>54.3%</td>
<td><strong>68.8%</strong></td>
</tr>
<tr>
<td colspan="5" style="text-align: center; border-bottom: 1px solid #ddd; padding: 8px;"></td>
</tr>
<tr>
<td>TYPES_type</td>
<td>43.5%</td>
<td>49.7%</td>
<td>47.6%</td>
<td><strong>82.5%</strong></td>
</tr>
<tr>
<td>TYPES_sub_type</td>
<td>38.9%</td>
<td>44.9%</td>
<td>45.9%</td>
<td><strong>75.4%</strong></td>
</tr>
<tr>
<td>appearance</td>
<td>40.9%</td>
<td>52.0%</td>
<td>45.6%</td>
<td><strong>59.3%</strong></td>
</tr>
<tr>
<td>action</td>
<td>32.4%</td>
<td>52.0%</td>
<td><strong>69.8%</strong></td>
<td>68.8%</td>
</tr>
<tr>
<td>position</td>
<td>35.4%</td>
<td>48.6%</td>
<td>45.5%</td>
<td><strong>57.5%</strong></td>
</tr>
<tr>
<td>is_main_subject</td>
<td>58.5%</td>
<td>68.7%</td>
<td>69.7%</td>
<td><strong>80.9%</strong></td>
</tr>
<tr>
<td>environment</td>
<td>70.4%</td>
<td><strong>72.7%</strong></td>
<td>61.4%</td>
<td>70.5%</td>
</tr>
<tr>
<td>lighting</td>
<td>77.1%</td>
<td><strong>80.0%</strong></td>
<td>21.2%</td>
<td>76.5%</td>
</tr>
</tbody>
</table>
</p>
#### Reinforcement Learning
Inspired by previous successes with LLMs, we propose to enhance the performance of the generative model through Reinforcement Learning. Specifically, we focus on motion quality because we find that the main drawbacks of our generative model are:
- the generative model does not handle large, deformable motions well;
- the generated videos may violate physical laws.
To avoid the degradation in other metrics, such as text alignment and video quality, we ensure the preference data pairs have comparable text alignment and video quality, while only the motion quality varies. This requirement poses greater challenges in obtaining preference annotations due to the inherently higher costs of human annotation. To address this challenge, we propose a semi-automatic pipeline that strategically combines automatically generated motion pairs and human annotation results. This hybrid approach not only enhances the data scale but also improves alignment with human preferences through curated quality control. Leveraging this enhanced dataset, we first train a specialized reward model to capture the generic motion quality differences between paired samples. This learned reward function subsequently guides the sample selection process for Direct Preference Optimization (DPO), enhancing the motion quality of the generative model.
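As a rough illustration of the final optimization step, the sketch below shows a generic DPO objective over reward-selected preference pairs. This is not the SkyReels-V2 training code: the tensor names are placeholders, and for a video diffusion model the per-sample log-likelihood scores are typically replaced by negative denoising errors.
```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_win: torch.Tensor, policy_logp_lose: torch.Tensor,
             ref_logp_win: torch.Tensor, ref_logp_lose: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Generic DPO objective over a batch of (preferred, dispreferred) samples."""
    policy_margin = policy_logp_win - policy_logp_lose   # how strongly the policy prefers the winner
    ref_margin = ref_logp_win - ref_logp_lose            # the reference model's preference margin
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Toy usage with random scores for a batch of 4 preference pairs selected by the reward model.
scores = [torch.randn(4) for _ in range(4)]
print(dpo_loss(*scores).item())
```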
#### Diffusion Forcing
We introduce the Diffusion Forcing Transformer to unlock our model’s ability to generate long videos. Diffusion Forcing is a training and sampling strategy where each token is assigned an independent noise level. This allows tokens to be denoised according to arbitrary, per-token schedules. Conceptually, this approach functions as a form of partial masking: a token with zero noise is fully unmasked, while complete noise fully masks it. Diffusion Forcing trains the model to "unmask" any combination of variably noised tokens, using the cleaner tokens as conditional information to guide the recovery of noisy ones. Building on this, our Diffusion Forcing Transformer can extend video generation indefinitely based on the last frames of the previous segment. Note that the synchronous full sequence diffusion is a special case of Diffusion Forcing, where all tokens share the same noise level. This relationship allows us to fine-tune the Diffusion Forcing Transformer from a full-sequence diffusion model.
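The per-token noise idea can be illustrated with a tiny, self-contained sketch (illustrative only; the noise parameterisation and scheduler in the actual SkyReels-V2 code differ): each latent frame token gets its own noise level, zero noise leaves a token fully "unmasked", and synchronous full-sequence diffusion is recovered by giving every token the same level.
```python
import torch

def forcing_noise(tokens: torch.Tensor, sigmas: torch.Tensor) -> torch.Tensor:
    """Noise each latent frame token independently according to its own sigma in [0, 1]."""
    eps = torch.randn_like(tokens)
    s = sigmas.unsqueeze(-1)
    return (1.0 - s).sqrt() * tokens + s.sqrt() * eps   # sigma=0 -> clean token, sigma=1 -> pure noise

tokens = torch.randn(25, 16)   # e.g. 25 latent frames with 16-dim toy features
sigmas = torch.zeros(25)       # keep the previous segment clean (fully "unmasked") ...
sigmas[20:] = torch.rand(5)    # ... and assign fresh, independent noise levels to the new block
noisy = forcing_noise(tokens, sigmas)
# Synchronous full-sequence diffusion is the special case where all sigmas share one value.
```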
#### High-Quality Supervised Fine-Tuning (SFT)
We implement two sequential high-quality supervised fine-tuning (SFT) stages at 540p and 720p resolutions respectively, with the initial SFT phase conducted immediately after pretraining but prior to the reinforcement learning (RL) stage. This first-stage SFT serves as a conceptual equilibrium trainer, building upon the foundation model's pretraining outcomes, which used only 24 fps video data, while strategically removing FPS embedding components to streamline the architecture. Trained with high-quality concept-balanced samples, this phase establishes optimized initialization parameters for subsequent training. Following this, we execute a secondary high-resolution SFT at 720p after completing the diffusion forcing stage, using identical loss formulations and higher-quality, manually filtered concept-balanced datasets. This final refinement phase focuses on increasing resolution so that overall video quality is further enhanced.
## Performance
To comprehensively evaluate our proposed method, we construct SkyReels-Bench for human assessment and leverage the open-source <a href="https://github.com/Vchitect/VBench">V-Bench</a> for automated evaluation. This allows us to compare our model with state-of-the-art (SOTA) baselines, including both open-source and proprietary models.
#### Human Evaluation
For human evaluation, we design SkyReels-Bench with 1,020 text prompts, systematically assessing instruction adherence, motion quality, consistency, and visual quality. This benchmark is designed to evaluate both text-to-video (T2V) and image-to-video (I2V) generation models, providing a comprehensive assessment across different generation paradigms. To ensure fairness, all models were evaluated under default settings with consistent resolutions, and no post-generation filtering was applied.
- Text To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model Name</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://runwayml.com/research/introducing-gen-3-alpha">Runway-Gen3 Alpha</a></td>
<td>2.53</td>
<td>2.19</td>
<td>2.57</td>
<td>3.23</td>
<td>2.11</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.82</td>
<td>2.64</td>
<td>2.81</td>
<td>3.20</td>
<td>2.61</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 STD Mode</a></td>
<td>2.99</td>
<td>2.77</td>
<td>3.05</td>
<td>3.39</td>
<td><strong>2.76</strong></td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.0</td>
<td>2.8</td>
<td>3.08</td>
<td>3.29</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>3.12</td>
<td>2.91</td>
<td>3.31</td>
<td><strong>3.54</strong></td>
<td>2.71</td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>3.14</strong></td>
<td><strong>3.15</strong></td>
<td><strong>3.35</strong></td>
<td>3.34</td>
<td>2.74</td>
</tr>
</tbody>
</table>
</p>
The evaluation demonstrates that our model achieves significant advancements in **instruction adherence (3.15)** compared to baseline methods, while maintaining competitive performance in **motion quality (2.74)** without sacrificing **consistency (3.35)**.
- Image To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.84</td>
<td>2.97</td>
<td>2.95</td>
<td>2.87</td>
<td>2.56</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>2.85</td>
<td>3.10</td>
<td>2.81</td>
<td>3.00</td>
<td>2.48</td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.05</td>
<td>3.31</td>
<td>2.58</td>
<td>3.55</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 Pro Mode</a></td>
<td>3.4</td>
<td>3.56</td>
<td>3.03</td>
<td>3.58</td>
<td>3.41</td>
</tr>
<tr>
<td><a href="https://runwayml.com/research/introducing-runway-gen-4">Runway-Gen4</a></td>
<td>3.39</td>
<td>3.75</td>
<td>3.2</td>
<td>3.4</td>
<td>3.37</td>
</tr>
<tr>
<td>SkyReels-V2-DF</td>
<td>3.24</td>
<td>3.64</td>
<td>3.21</td>
<td>3.18</td>
<td>2.93</td>
</tr>
<tr>
<td>SkyReels-V2-I2V</td>
<td>3.29</td>
<td>3.42</td>
<td>3.18</td>
<td>3.56</td>
<td>3.01</td>
</tr>
</tbody>
</table>
</p>
Our results demonstrate that both **SkyReels-V2-I2V (3.29)** and **SkyReels-V2-DF (3.24)** achieve state-of-the-art performance among open-source models, significantly outperforming HunyuanVideo-13B (2.84) and Wan2.1-14B (2.85) across all quality dimensions. With an average score of 3.29, SkyReels-V2-I2V demonstrates comparable performance to proprietary models Kling-1.6 (3.4) and Runway-Gen4 (3.39).
#### VBench
To objectively compare the SkyReels-V2 model against other leading open-source text-to-video models, we conduct comprehensive evaluations using the public benchmark <a href="https://github.com/Vchitect/VBench">V-Bench</a>. Our evaluation specifically uses the benchmark's longer prompt version. For a fair comparison with baseline models, we strictly follow their recommended inference settings.
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Total Score</th>
<th>Quality Score</th>
<th>Semantic Score</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/hpcaitech/Open-Sora">OpenSora 2.0</a></td>
<td>81.5 %</td>
<td>82.1 %</td>
<td>78.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/THUDM/CogVideo">CogVideoX1.5-5B</a></td>
<td>80.3 %</td>
<td>80.9 %</td>
<td>77.9 %</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>82.7 %</td>
<td>84.4 %</td>
<td>76.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>83.7 %</td>
<td>84.2 %</td>
<td><strong>81.4 %</strong></td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>83.9 %</strong></td>
<td><strong>84.7 %</strong></td>
<td>80.8 %</td>
</tr>
</tbody>
</table>
</p>
The VBench results demonstrate that SkyReels-V2 outperforms all compared models, including HunyuanVideo-13B and Wan2.1-14B, with the highest **total score (83.9%)** and **quality score (84.7%)**. In this evaluation, the semantic score is slightly lower than Wan2.1-14B's, although we outperform Wan2.1-14B in human evaluations; the primary gap is attributed to V-Bench's insufficient evaluation of shot-scenario semantic adherence.
## Acknowledgements
We would like to thank the contributors of the <a href="https://github.com/Wan-Video/Wan2.1">Wan 2.1</a>, <a href="https://github.com/xdit-project/xDiT">xDiT</a> and <a href="https://qwenlm.github.io/blog/qwen2.5/">Qwen 2.5</a> repositories for their open research and contributions.
## Citation
```bibtex
@misc{chen2025skyreelsv2infinitelengthfilmgenerative,
title={SkyReels-V2: Infinite-length Film Generative Model},
author={Guibin Chen and Dixuan Lin and Jiangping Yang and Chunze Lin and Junchen Zhu and Mingyuan Fan and Hao Zhang and Sheng Chen and Zheng Chen and Chengcheng Ma and Weiming Xiong and Wei Wang and Nuo Pang and Kang Kang and Zhiheng Xu and Yuzhe Jin and Yupeng Liang and Yubing Song and Peng Zhao and Boyuan Xu and Di Qiu and Debang Li and Zhengcong Fei and Yang Li and Yahui Zhou},
year={2025},
eprint={2504.13074},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.13074},
}
```
|
roeker/blockassist-bc-quick_wiry_owl_1754889174
|
roeker
| 2025-08-11T05:14:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:13:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754888720
|
ggozzy
| 2025-08-11T05:06:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:06:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nijes/mixed_1e-5
|
nijes
| 2025-08-11T05:04:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-11T04:58:06Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
model-index:
- name: nijes/mixed_1e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nijes/mixed_1e-5
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2613
- Cer: 10.9862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.698 | 1.0 | 1114 | 0.4865 | 14.7030 |
| 0.5294 | 2.0 | 2228 | 0.3672 | 13.2286 |
| 0.3749 | 3.0 | 3342 | 0.3083 | 12.6513 |
| 0.3479 | 4.0 | 4456 | 0.2905 | 12.6933 |
| 0.2457 | 5.0 | 5570 | 0.2796 | 11.9239 |
| 0.2168 | 6.0 | 6684 | 0.2719 | 11.9318 |
| 0.2455 | 7.0 | 7798 | 0.2724 | 11.4557 |
| 0.2279 | 8.0 | 8912 | 0.2631 | 11.1006 |
| 0.2242 | 9.0 | 10026 | 0.2637 | 10.9585 |
| 0.1664 | 10.0 | 11140 | 0.2613 | 10.9862 |
### Framework versions
- Transformers 4.53.0
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.4
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754888573
|
IvanJAjebu
| 2025-08-11T05:04:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:03:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Lakshan2003/Gemma3-4B-financial-customerservice
|
Lakshan2003
| 2025-08-11T05:03:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"region:us"
] | null | 2025-08-11T05:02:55Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
library_name: peft
tags:
- base_model:adapter:unsloth/gemma-3-4b-it-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
roeker/blockassist-bc-quick_wiry_owl_1754888434
|
roeker
| 2025-08-11T05:01:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T05:01:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF
|
mradermacher
| 2025-08-11T05:01:05Z | 100 | 0 |
transformers
|
[
"transformers",
"gguf",
"cybersecurity",
"en",
"ja",
"dataset:trend-cybertron/Primus-Nemotron-CC",
"dataset:trendmicro-ailab/Primus-FineWeb",
"dataset:trendmicro-ailab/Primus-Instruct",
"base_model:trend-cybertron/Llama-Primus-Nemotron-70B-Instruct",
"base_model:quantized:trend-cybertron/Llama-Primus-Nemotron-70B-Instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-04-21T16:38:35Z |
---
base_model: trend-cybertron/Llama-Primus-Nemotron-70B-Instruct
datasets:
- trend-cybertron/Primus-Nemotron-CC
- trendmicro-ailab/Primus-FineWeb
- trendmicro-ailab/Primus-Instruct
extra_gated_fields:
Affiliation: text
Country: country
I want to use this model for:
options:
- Research
- Commercial
- label: Other
value: other
type: select
Job title:
options:
- Student
- Research graduate
- AI researcher
- AI developer/engineer
- Cybersecurity researcher
- Reporter
- Other
type: select
geo: ip_location
language:
- en
- ja
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- cybersecurity
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/trend-cybertron/Llama-Primus-Nemotron-70B-Instruct
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-Primus-Nemotron-70B-Instruct-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
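As a minimal sketch (assuming, as is usual for these repositories, that the `.partNofM` files are plain byte splits of the original GGUF), the multi-part Q6_K/Q8_0 downloads listed below can be reassembled like this; the prefix is simply whichever quant you downloaded:
```python
import glob
import shutil

# Assumption: the .partNofM files are plain byte splits, so concatenating them in
# order reproduces the original GGUF file. Adjust the prefix to the quant you downloaded.
prefix = "Llama-Primus-Nemotron-70B-Instruct.Q6_K.gguf"
parts = sorted(glob.glob(prefix + ".part*"))  # lexical order is fine for part1of2, part2of2

with open(prefix, "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```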
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Instruct.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Instruct.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Instruct.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Instruct.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Instruct.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Instruct.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Instruct.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Instruct.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Instruct.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Instruct.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Instruct.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Instruct.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Instruct-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Instruct.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Qika/Llama-3.2-3B-Instruct
|
Qika
| 2025-08-11T04:58:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T04:49:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF
|
mradermacher
| 2025-08-11T04:57:33Z | 173 | 0 |
transformers
|
[
"transformers",
"gguf",
"cybersecurity",
"en",
"ja",
"dataset:trend-cybertron/Primus-Nemotron-CC",
"dataset:trendmicro-ailab/Primus-FineWeb",
"base_model:trend-cybertron/Llama-Primus-Nemotron-70B-Base",
"base_model:quantized:trend-cybertron/Llama-Primus-Nemotron-70B-Base",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-04-23T03:48:12Z |
---
base_model: trend-cybertron/Llama-Primus-Nemotron-70B-Base
datasets:
- trend-cybertron/Primus-Nemotron-CC
- trendmicro-ailab/Primus-FineWeb
extra_gated_fields:
Affiliation: text
Country: country
I want to use this model for:
options:
- Research
- Commercial
- label: Other
value: other
type: select
Job title:
options:
- Student
- Research graduate
- AI researcher
- AI developer/engineer
- Cybersecurity researcher
- Reporter
- Other
type: select
geo: ip_location
language:
- en
- ja
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- cybersecurity
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/trend-cybertron/Llama-Primus-Nemotron-70B-Base
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Llama-Primus-Nemotron-70B-Base-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-Primus-Nemotron-70B-Base-i1-GGUF/resolve/main/Llama-Primus-Nemotron-70B-Base.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754888141
|
ggozzy
| 2025-08-11T04:57:05Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:56:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Alizabethli/Qwen32_SFT_RL_gpt
|
Alizabethli
| 2025-08-11T04:56:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T04:29:18Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: Qwen32_SFT_RL_gpt
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen32_SFT_RL_gpt
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Alizabethli/Qwen32_SFT_RL_gpt", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
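A minimal sketch of what a TRL GRPO setup looks like (not the exact recipe or data used for this model; the reward function and dataset below are placeholders):
```python
# Minimal TRL GRPO sketch; the reward function and dataset are placeholders,
# not the actual training recipe for Qwen32_SFT_RL_gpt.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 64 characters.
    return [-abs(64 - len(c)) for c in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder prompt dataset

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-32B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen32_SFT_RL_gpt-grpo", per_device_train_batch_size=1),
    train_dataset=dataset,
)
trainer.train()
```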
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.5.1+cu121
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
giovannidemuri/llama8b-er-afg-v13-seed2-mcdonald_lora
|
giovannidemuri
| 2025-08-11T04:53:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:adapter:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"region:us"
] | null | 2025-08-11T02:58:51Z |
---
library_name: peft
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- generated_from_trainer
model-index:
- name: llama8b-er-afg-v13-seed2-mcdonald_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama8b-er-afg-v13-seed2-mcdonald_lora
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.0+cu128
- Datasets 4.0.0
- Tokenizers 0.21.0
|
datasetsANDmodels/melody-maker-from_description
|
datasetsANDmodels
| 2025-08-11T04:51:43Z | 0 | 0 | null |
[
"pytorch",
"musicgen",
"base_model:datasetsANDmodels/melody-maker-from_description",
"base_model:finetune:datasetsANDmodels/melody-maker-from_description",
"region:us"
] | null | 2025-08-11T04:21:29Z |
---
base_model:
- datasetsANDmodels/melody-maker-from_description
---
This model generates a melody based on a description.
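A minimal usage sketch, assuming the repository follows the standard transformers MusicGen layout (a bundled processor plus `MusicgenForConditionalGeneration`); the prompt and output path are placeholders:
```python
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

repo = "datasetsANDmodels/melody-maker-from_description"
processor = AutoProcessor.from_pretrained(repo)                 # assumes a processor is bundled with the repo
model = MusicgenForConditionalGeneration.from_pretrained(repo)

inputs = processor(text=["a calm piano melody with soft strings"], return_tensors="pt")
audio = model.generate(**inputs, max_new_tokens=256)            # roughly 5 seconds of audio

rate = model.config.audio_encoder.sampling_rate
scipy.io.wavfile.write("melody.wav", rate=rate, data=audio[0, 0].numpy())
```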
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754887793
|
ggozzy
| 2025-08-11T04:51:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:51:00Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
awilliam60412/Llama-3.2-3B-Instruct-Training_one_epoch
|
awilliam60412
| 2025-08-11T04:49:56Z | 0 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-11T04:49:46Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** awilliam60412
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
roeker/blockassist-bc-quick_wiry_owl_1754887695
|
roeker
| 2025-08-11T04:49:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:49:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Qika/Llama-3.2-1B-Instruct
|
Qika
| 2025-08-11T04:48:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T04:44:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jahyungu/Llama-3.2-1B-Instruct_TACO
|
jahyungu
| 2025-08-11T04:45:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"dataset:taco",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T00:09:45Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B-Instruct
tags:
- generated_from_trainer
datasets:
- taco
model-index:
- name: Llama-3.2-1B-Instruct_TACO
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-Instruct_TACO
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on the taco dataset.
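No usage example is provided; a minimal sketch with the text-generation pipeline is shown below (the prompt is illustrative of the competitive-programming style problems in TACO):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="jahyungu/Llama-3.2-1B-Instruct_TACO", device_map="auto")
messages = [{"role": "user", "content": "Write a Python function that returns the n-th Fibonacci number."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```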
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
shreyanshkedawat/dummy-model
|
shreyanshkedawat
| 2025-08-11T04:40:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-11T04:40:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754886200
|
Sayemahsjn
| 2025-08-11T04:40:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:40:21Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
michaelcpage345/blockassist-bc-miniature_deadly_anteater_1754885175
|
michaelcpage345
| 2025-08-11T04:36:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature deadly anteater",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:36:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature deadly anteater
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sumukha2002/Sanskrit-English-Translator-Context
|
sumukha2002
| 2025-08-11T04:34:39Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:sumukha2002/Sanskrit-English-Translator-Context",
"base_model:finetune:sumukha2002/Sanskrit-English-Translator-Context",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T22:43:16Z |
---
library_name: transformers
base_model: sumukha2002/Sanskrit-English-Translator-Context
tags:
- generated_from_trainer
model-index:
- name: Sanskrit-English-Translator-Context
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sanskrit-English-Translator-Context
This model is a fine-tuned version of [sumukha2002/Sanskrit-English-Translator-Context](https://huggingface.co/sumukha2002/Sanskrit-English-Translator-Context) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0089
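No usage example is provided; a minimal sketch with the text2text-generation pipeline is shown below (the Sanskrit sentence is illustrative, and depending on the underlying tokenizer you may need to set source/target language codes explicitly):
```python
from transformers import pipeline

translator = pipeline("text2text-generation", model="sumukha2002/Sanskrit-English-Translator-Context")
# Language codes, beam size, etc. may need adjusting to match the base checkpoint's tokenizer.
print(translator("धर्मो रक्षति रक्षितः", max_new_tokens=64)[0]["generated_text"])
```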
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 41 | 1.0037 |
| No log | 2.0 | 82 | 1.0023 |
| 1.1057 | 3.0 | 123 | 1.0022 |
| 1.1057 | 4.0 | 164 | 1.0078 |
| 0.9573 | 5.0 | 205 | 1.0088 |
| 0.9573 | 6.0 | 246 | 1.0089 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.21.4
|
edyinmyhome/edyKim-llama-2-7b-miniguanaco
|
edyinmyhome
| 2025-08-11T04:27:37Z | 0 | 0 |
peft
|
[
"peft",
"pytorch",
"llama",
"region:us"
] | null | 2025-08-11T03:48:04Z |
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
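For reference, the settings above correspond roughly to the following `bitsandbytes` configuration when reloading the base model (a sketch; the base checkpoint name is an assumption inferred from the repo name):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4, no double quantization, fp16 compute -- matching the list above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", quantization_config=bnb_config)
model = PeftModel.from_pretrained(base, "edyinmyhome/edyKim-llama-2-7b-miniguanaco")
```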
### Framework versions
- PEFT 0.4.0
|
tinman2030/llama-3.2-1b-timed_model
|
tinman2030
| 2025-08-11T04:27:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-04-09T20:35:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Timia123/simpo_inpo_iter3_aug10
|
Timia123
| 2025-08-11T04:25:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"alignment-handbook",
"inpo",
"simpo",
"gemma-2",
"generated_from_trainer",
"conversational",
"base_model:google/gemma-2-9b-it",
"base_model:finetune:google/gemma-2-9b-it",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T21:16:58Z |
---
library_name: transformers
base_model: google/gemma-2-9b-it
tags:
- alignment-handbook
- inpo
- simpo
- gemma-2
- generated_from_trainer
# The 'datasets' key has been removed as it was a local path.
# You can add a Hub dataset ID here if applicable, e.g., 'HuggingFaceH4/ultrafeedback_binarized'
model-index:
- name: gemma-2-9b-it_inpo_stage_3
results: []
---
<!-- This model card has been generated automatically. Please proofread and complete it. -->
# gemma-2-9b-it_inpo_stage_3
This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it) on a private dataset using the SimPO/INPO alignment method.
## Model description
This model was trained as part of a SimPO/INPO experiment. More information about the training process and objectives can be added here.
## Intended uses & limitations
This model is intended for research purposes and as a demonstration of the SimPO/INPO alignment technique. It has not been evaluated for production use and may exhibit biases or generate unsafe content.
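For quick experimentation, generation can be run with the standard 🤗 pipeline (a sketch; the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Timia123/simpo_inpo_iter3_aug10", device_map="auto")
messages = [{"role": "user", "content": "Briefly explain what preference optimization does during alignment."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```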
## Training and evaluation data
The model was trained on the private `data/inpo_iter3/pref` dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
*No training results provided.*
### Framework versions
- Transformers 4.44.2
- Pytorch 2.2.2
- Datasets 2.14.6
- Tokenizers 0.19.1
|
Vyrist13/Sentiment_Model
|
Vyrist13
| 2025-08-11T04:21:14Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"autotrain",
"base_model:indobenchmark/indobert-large-p2",
"base_model:finetune:indobenchmark/indobert-large-p2",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T04:21:13Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: indobenchmark/indobert-large-p2
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.32718178629875183
f1_macro: 0.9066774093109434
f1_micro: 0.9064200217627857
f1_weighted: 0.9066709005064728
precision_macro: 0.9081976476309634
precision_micro: 0.9064200217627857
precision_weighted: 0.9081923277529623
recall_macro: 0.9064274503316717
recall_micro: 0.9064200217627857
recall_weighted: 0.9064200217627857
accuracy: 0.9064200217627857
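The classifier can be tried with the standard text-classification pipeline (a minimal sketch; the Indonesian example sentence is illustrative):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Vyrist13/Sentiment_Model")
print(classifier("Pelayanan restoran ini sangat memuaskan."))
```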
|
roeker/blockassist-bc-quick_wiry_owl_1754885847
|
roeker
| 2025-08-11T04:19:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:18:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lemonhat/Qwen2.5-7B-agenttuning_v1_tag5
|
lemonhat
| 2025-08-11T04:18:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T04:17:12Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: agenttuning_v1_tag5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# agenttuning_v1_tag5
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the agenttuning_v1_tag5 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4105
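No usage example is included; a minimal sketch for chat-style inference is shown below (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="lemonhat/Qwen2.5-7B-agenttuning_v1_tag5", device_map="auto")
messages = [{"role": "user", "content": "List the steps an agent should take to book a flight online."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```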
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 4
- total_eval_batch_size: 4
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5552 | 0.0829 | 100 | 0.4911 |
| 0.4345 | 0.1658 | 200 | 0.4820 |
| 0.3409 | 0.2488 | 300 | 0.4472 |
| 0.4594 | 0.3317 | 400 | 0.4367 |
| 0.4461 | 0.4146 | 500 | 0.4403 |
| 0.5229 | 0.4975 | 600 | 0.4308 |
| 0.3798 | 0.5804 | 700 | 0.4193 |
| 0.325 | 0.6633 | 800 | 0.4246 |
| 0.319 | 0.7463 | 900 | 0.4120 |
| 0.4063 | 0.8292 | 1000 | 0.4113 |
| 0.4328 | 0.9121 | 1100 | 0.4114 |
| 0.4578 | 0.9950 | 1200 | 0.4111 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.7.1+cu126
- Datasets 3.1.0
- Tokenizers 0.20.3
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754885730
|
ggozzy
| 2025-08-11T04:17:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:16:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
warlockmage/blockassist-bc-bold_scurrying_robin_1754885641
|
warlockmage
| 2025-08-11T04:14:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold scurrying robin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:14:24Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold scurrying robin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zuruyu/blockassist-bc-endangered_pesty_chinchilla_1754885498
|
zuruyu
| 2025-08-11T04:12:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"endangered pesty chinchilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:12:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- endangered pesty chinchilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nlee-208/limo_S-dsr1b_T-q32b_25
|
nlee-208
| 2025-08-11T04:12:07Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T03:07:45Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
model_name: limo_S-dsr1b_T-q32b_25
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for limo_S-dsr1b_T-q32b_25
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nlee-208/limo_S-dsr1b_T-q32b_25", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nlee28/cross1/runs/kbup7uoj)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.53.3
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Creepyk/blockassist-bc-monstrous_furry_caribou_1754884412
|
Creepyk
| 2025-08-11T04:09:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous furry caribou",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:09:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous furry caribou
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rahulseetharaman/reranker-bert-uncased_L-10_H-256_A-4-msmarco-bce
|
rahulseetharaman
| 2025-08-11T04:07:09Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:90000",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"en",
"dataset:sentence-transformers/msmarco",
"arxiv:1908.10084",
"base_model:bansalaman18/bert-uncased_L-10_H-256_A-4",
"base_model:finetune:bansalaman18/bert-uncased_L-10_H-256_A-4",
"model-index",
"region:us"
] |
text-ranking
| 2025-08-11T04:07:07Z |
---
language:
- en
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:90000
- loss:BinaryCrossEntropyLoss
base_model: bansalaman18/bert-uncased_L-10_H-256_A-4
datasets:
- sentence-transformers/msmarco
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
model-index:
- name: CrossEncoder based on bansalaman18/bert-uncased_L-10_H-256_A-4
results:
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoMSMARCO R100
type: NanoMSMARCO_R100
metrics:
- type: map
value: 0.0872
name: Map
- type: mrr@10
value: 0.0649
name: Mrr@10
- type: ndcg@10
value: 0.0903
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoNFCorpus R100
type: NanoNFCorpus_R100
metrics:
- type: map
value: 0.2815
name: Map
- type: mrr@10
value: 0.4108
name: Mrr@10
- type: ndcg@10
value: 0.2897
name: Ndcg@10
- task:
type: cross-encoder-reranking
name: Cross Encoder Reranking
dataset:
name: NanoNQ R100
type: NanoNQ_R100
metrics:
- type: map
value: 0.0564
name: Map
- type: mrr@10
value: 0.0317
name: Mrr@10
- type: ndcg@10
value: 0.0532
name: Ndcg@10
- task:
type: cross-encoder-nano-beir
name: Cross Encoder Nano BEIR
dataset:
name: NanoBEIR R100 mean
type: NanoBEIR_R100_mean
metrics:
- type: map
value: 0.1417
name: Map
- type: mrr@10
value: 0.1692
name: Mrr@10
- type: ndcg@10
value: 0.1444
name: Ndcg@10
---
# CrossEncoder based on bansalaman18/bert-uncased_L-10_H-256_A-4
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [bansalaman18/bert-uncased_L-10_H-256_A-4](https://huggingface.co/bansalaman18/bert-uncased_L-10_H-256_A-4) on the [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco) dataset using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [bansalaman18/bert-uncased_L-10_H-256_A-4](https://huggingface.co/bansalaman18/bert-uncased_L-10_H-256_A-4) <!-- at revision 2c743a1678c7e2a9a2ba9cda4400b08cfa7054fc -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
- **Training Dataset:**
- [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("rahulseetharaman/reranker-bert-uncased_L-10_H-256_A-4-msmarco-bce")
# Get scores for pairs of texts
pairs = [
['are solar pool covers worth it', 'If you are using Onga pool pumps or Hurlcon pool pumps, then you need not worry about them getting overheated for they are one of the best pool pumps available on the market. If you want to know about What causes a pool pump to overheat so please visit here onga pool pumps.'],
['how much do Customer Service Agent: Ticketing/Gate make in general', '$41,000. Average Airport Customer Service Ticketing Gate Agent salaries for job postings in Houston, TX are 13% higher than average Airport Customer Service Ticketing Gate Agent salaries for job postings nationwide.verage Airport Customer Service Ticketing Gate Agent salaries for job postings in Houston, TX are 13% higher than average Airport Customer Service Ticketing Gate Agent salaries for job postings nationwide.'],
['what is adverse selection economics', 'The last first woman to win the Nobel in her category was Elinor Ostrom, who shared the 2009 economics prize for her groundbreaking analysis of common property. The wait was so long for a woman economics laureate in part because that prize wasn’t established until 1969.'],
['where do newts live', 'Newts can be found living in North America, Europe and Asia. They are not found in Australia or Africa. In fact there are no species of salamander that live in Australia and only a few found in Northern Africa. Seven species of newt live in Europe.'],
['define: rolling hourly average', 'An example of two moving average curves. In statistics, a moving average (rolling average or running average) is a calculation to analyze data points by creating series of averages of different subsets of the full data set. It is also called a moving mean (MM) or rolling mean and is a type of finite impulse response filter.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'are solar pool covers worth it',
[
'If you are using Onga pool pumps or Hurlcon pool pumps, then you need not worry about them getting overheated for they are one of the best pool pumps available on the market. If you want to know about What causes a pool pump to overheat so please visit here onga pool pumps.',
'$41,000. Average Airport Customer Service Ticketing Gate Agent salaries for job postings in Houston, TX are 13% higher than average Airport Customer Service Ticketing Gate Agent salaries for job postings nationwide.verage Airport Customer Service Ticketing Gate Agent salaries for job postings in Houston, TX are 13% higher than average Airport Customer Service Ticketing Gate Agent salaries for job postings nationwide.',
'The last first woman to win the Nobel in her category was Elinor Ostrom, who shared the 2009 economics prize for her groundbreaking analysis of common property. The wait was so long for a woman economics laureate in part because that prize wasn’t established until 1969.',
'Newts can be found living in North America, Europe and Asia. They are not found in Australia or Africa. In fact there are no species of salamander that live in Australia and only a few found in Northern Africa. Seven species of newt live in Europe.',
'An example of two moving average curves. In statistics, a moving average (rolling average or running average) is a calculation to analyze data points by creating series of averages of different subsets of the full data set. It is also called a moving mean (MM) or rolling mean and is a type of finite impulse response filter.',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Reranking
* Datasets: `NanoMSMARCO_R100`, `NanoNFCorpus_R100` and `NanoNQ_R100`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator) with these parameters:
```json
{
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | NanoMSMARCO_R100 | NanoNFCorpus_R100 | NanoNQ_R100 |
|:------------|:---------------------|:---------------------|:---------------------|
| map | 0.0872 (-0.4024) | 0.2815 (+0.0205) | 0.0564 (-0.3632) |
| mrr@10 | 0.0649 (-0.4126) | 0.4108 (-0.0890) | 0.0317 (-0.3949) |
| **ndcg@10** | **0.0903 (-0.4501)** | **0.2897 (-0.0353)** | **0.0532 (-0.4474)** |
#### Cross Encoder Nano BEIR
* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator) with these parameters:
```json
{
"dataset_names": [
"msmarco",
"nfcorpus",
"nq"
],
"rerank_k": 100,
"at_k": 10,
"always_rerank_positives": true
}
```
| Metric | Value |
|:------------|:---------------------|
| map | 0.1417 (-0.2484) |
| mrr@10 | 0.1692 (-0.2989) |
| **ndcg@10** | **0.1444 (-0.3110)** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### msmarco
* Dataset: [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco) at [9e329ed](https://huggingface.co/datasets/sentence-transformers/msmarco/tree/9e329ed2e649c9d37b0d91dd6b764ff6fe671d83)
* Size: 90,000 training samples
* Columns: <code>query</code>, <code>passage</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | query | passage | score |
|:--------|:-----------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 7 characters</li><li>mean: 33.59 characters</li><li>max: 164 characters</li></ul> | <ul><li>min: 49 characters</li><li>mean: 340.88 characters</li><li>max: 1018 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.53</li><li>max: 1.0</li></ul> |
* Samples:
| query | passage | score |
|:---------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>fantomcoin current price</code> | <code>The current Average monthly rental price per square meter for a studio property in Pretoria / Tshwane on Gumtree is R 47.</code> | <code>0.0</code> |
| <code>ddp price definition</code> | <code>Delivered Duty Paid - DDP. Loading the player... What does 'Delivered Duty Paid - DDP' mean. Delivered duty paid (DDP) is a transaction where the seller pays for the total costs associated with transporting goods and is fully responsible for the goods until they are received and transferred to the buyer.</code> | <code>1.0</code> |
| <code>what is neil diamond's hometown</code> | <code>Oct 6, 2014 8:00 am ET. Brooklyn native Neil Diamond played his first-ever hometown show last week with a 10-song set at Erasmus Hall High School, where he sang in the choir during the two years he was a student there. Speakeasy today premieres a clip of Diamond performing the new song “Something Blue” at that concert.</code> | <code>1.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
### Evaluation Dataset
#### msmarco
* Dataset: [msmarco](https://huggingface.co/datasets/sentence-transformers/msmarco) at [9e329ed](https://huggingface.co/datasets/sentence-transformers/msmarco/tree/9e329ed2e649c9d37b0d91dd6b764ff6fe671d83)
* Size: 10,000 evaluation samples
* Columns: <code>query</code>, <code>passage</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | query | passage | score |
|:--------|:-----------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 9 characters</li><li>mean: 34.17 characters</li><li>max: 146 characters</li></ul> | <ul><li>min: 83 characters</li><li>mean: 349.58 characters</li><li>max: 974 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.51</li><li>max: 1.0</li></ul> |
* Samples:
| query | passage | score |
|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>are solar pool covers worth it</code> | <code>If you are using Onga pool pumps or Hurlcon pool pumps, then you need not worry about them getting overheated for they are one of the best pool pumps available on the market. If you want to know about What causes a pool pump to overheat so please visit here onga pool pumps.</code> | <code>0.0</code> |
| <code>how much do Customer Service Agent: Ticketing/Gate make in general</code> | <code>$41,000. Average Airport Customer Service Ticketing Gate Agent salaries for job postings in Houston, TX are 13% higher than average Airport Customer Service Ticketing Gate Agent salaries for job postings nationwide.verage Airport Customer Service Ticketing Gate Agent salaries for job postings in Houston, TX are 13% higher than average Airport Customer Service Ticketing Gate Agent salaries for job postings nationwide.</code> | <code>1.0</code> |
| <code>what is adverse selection economics</code> | <code>The last first woman to win the Nobel in her category was Elinor Ostrom, who shared the 2009 economics prize for her groundbreaking analysis of common property. The wait was so long for a woman economics laureate in part because that prize wasn’t established until 1969.</code> | <code>0.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True
- `dataloader_num_workers`: 4
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_R100_ndcg@10 | NanoNFCorpus_R100_ndcg@10 | NanoNQ_R100_ndcg@10 | NanoBEIR_R100_mean_ndcg@10 |
|:----------:|:---------:|:-------------:|:---------------:|:------------------------:|:-------------------------:|:--------------------:|:--------------------------:|
| -1 | -1 | - | - | 0.0797 (-0.4607) | 0.2817 (-0.0434) | 0.0302 (-0.4704) | 0.1305 (-0.3248) |
| 0.0002 | 1 | 0.6362 | - | - | - | - | - |
| 0.1778 | 1000 | 0.6946 | 0.7033 | 0.0227 (-0.5178) | 0.2131 (-0.1119) | 0.0285 (-0.4722) | 0.0881 (-0.3673) |
| 0.3556 | 2000 | 0.6943 | 0.6900 | 0.0155 (-0.5250) | 0.2458 (-0.0792) | 0.0718 (-0.4289) | 0.1110 (-0.3443) |
| 0.5333 | 3000 | 0.6924 | 0.6786 | 0.0399 (-0.5005) | 0.2142 (-0.1109) | 0.0626 (-0.4380) | 0.1056 (-0.3498) |
| 0.7111 | 4000 | 0.6821 | 0.6755 | 0.0379 (-0.5025) | 0.2399 (-0.0851) | 0.0682 (-0.4325) | 0.1153 (-0.3400) |
| 0.8889 | 5000 | 0.6749 | 0.6678 | 0.0466 (-0.4938) | 0.2542 (-0.0709) | 0.0947 (-0.4060) | 0.1318 (-0.3235) |
| 1.0667 | 6000 | 0.6699 | 0.6661 | 0.0536 (-0.4868) | 0.2670 (-0.0581) | 0.0498 (-0.4508) | 0.1235 (-0.3319) |
| 1.2444 | 7000 | 0.6576 | 0.6651 | 0.0389 (-0.5016) | 0.2491 (-0.0760) | 0.0450 (-0.4557) | 0.1110 (-0.3444) |
| 1.4222 | 8000 | 0.6579 | 0.6891 | 0.0375 (-0.5029) | 0.2852 (-0.0398) | 0.0370 (-0.4637) | 0.1199 (-0.3355) |
| 1.6 | 9000 | 0.6459 | 0.6646 | 0.0553 (-0.4851) | 0.2706 (-0.0544) | 0.0461 (-0.4545) | 0.1240 (-0.3314) |
| 1.7778 | 10000 | 0.6576 | 0.6592 | 0.0493 (-0.4911) | 0.2633 (-0.0618) | 0.0352 (-0.4654) | 0.1159 (-0.3394) |
| 1.9556 | 11000 | 0.6499 | 0.6589 | 0.0631 (-0.4773) | 0.2778 (-0.0472) | 0.0581 (-0.4426) | 0.1330 (-0.3224) |
| 2.1333 | 12000 | 0.6289 | 0.6755 | 0.0744 (-0.4660) | 0.2747 (-0.0503) | 0.0386 (-0.4620) | 0.1292 (-0.3261) |
| 2.3111 | 13000 | 0.6233 | 0.6888 | 0.0617 (-0.4787) | 0.2963 (-0.0287) | 0.0494 (-0.4513) | 0.1358 (-0.3196) |
| 2.4889 | 14000 | 0.6257 | 0.6854 | 0.0788 (-0.4616) | 0.2920 (-0.0331) | 0.0532 (-0.4475) | 0.1413 (-0.3141) |
| 2.6667 | 15000 | 0.619 | 0.6705 | 0.0741 (-0.4663) | 0.2863 (-0.0388) | 0.0645 (-0.4361) | 0.1416 (-0.3137) |
| 2.8444 | 16000 | 0.6218 | 0.6868 | 0.0750 (-0.4654) | 0.2874 (-0.0377) | 0.0583 (-0.4424) | 0.1402 (-0.3151) |
| 3.0222 | 17000 | 0.6191 | 0.6846 | 0.0768 (-0.4637) | 0.2879 (-0.0372) | 0.0393 (-0.4613) | 0.1346 (-0.3207) |
| 3.2 | 18000 | 0.5977 | 0.6846 | 0.0883 (-0.4521) | 0.2874 (-0.0376) | 0.0457 (-0.4549) | 0.1405 (-0.3149) |
| 3.3778 | 19000 | 0.5947 | 0.6938 | 0.0877 (-0.4528) | 0.2798 (-0.0452) | 0.0615 (-0.4391) | 0.1430 (-0.3124) |
| 3.5556 | 20000 | 0.5944 | 0.6860 | 0.0815 (-0.4589) | 0.2856 (-0.0395) | 0.0561 (-0.4446) | 0.1411 (-0.3143) |
| **3.7333** | **21000** | **0.5939** | **0.6887** | **0.0903 (-0.4501)** | **0.2897 (-0.0353)** | **0.0532 (-0.4474)** | **0.1444 (-0.3110)** |
| 3.9111 | 22000 | 0.5947 | 0.6908 | 0.0876 (-0.4528) | 0.2897 (-0.0353) | 0.0545 (-0.4461) | 0.1440 (-0.3114) |
| -1 | -1 | - | - | 0.0903 (-0.4501) | 0.2897 (-0.0353) | 0.0532 (-0.4474) | 0.1444 (-0.3110) |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.18
- Sentence Transformers: 5.0.0
- Transformers: 4.56.0.dev0
- PyTorch: 2.7.1+cu126
- Accelerate: 1.9.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
superskie/blockassist-bc-playful_wily_trout_1754885163
|
superskie
| 2025-08-11T04:06:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful wily trout",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:06:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful wily trout
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
metercai/SimpleSDXL2
|
metercai
| 2025-08-11T04:05:14Z | 5,746 | 35 |
diffusers
|
[
"diffusers",
"onnx",
"safetensors",
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2024-05-30T13:46:27Z |
---
license: apache-2.0
---
## SimpleSDXL2 - The most powerful Chinese-language creative image generation: run Hunyuan, Kolors, SD3m and Flux with only 6 GB of VRAM!
<div align=center><img src="https://github.com/user-attachments/assets/98715a4d-9f4a-4846-ae62-eb8d69793d31"></div>
### 🚀 B站视频推荐:[6G显存玩转Flux](https://www.bilibili.com/video/BV1KJWreyEuU) : https://www.bilibili.com/video/BV1KJWreyEuU
### 🚀 飞书wiki: [《SimpleSDXL创意生图应用指南》](https://acnmokx5gwds.feishu.cn/wiki/QK3LwOp2oiRRaTkFRhYcO4LonGe), 包括如何快速下载、安装和运行,如何操作进行创意出图,在不同场景下如何使用SimpleSDXL等等。
## 🔔 Latest Updates / Update
- [2024-09-16] <b>Unlocked Flux LoRA and Kolors LoRA. Moved the auto-retouch switch into the enhanced inpainting tab. Added a prompt-panel switch that opens the batch wildcard panel on click. Moved the prompt interrogation and image-parameter extraction tabs into the parameter settings panel. Added base-model filtering based on presets. Fixed skip and interrupt logic bugs in the Comfyd engine. Optimized preset parameters and preset navigation. Flux models now adapt automatically to the hardware environment. The Hyp8Q5KM model is preferred: it supports Flux LoRA and balances speed and quality. Added two seamless-tiling presets. Upgraded comfyd to the latest version. Streamlined download, installation and startup, enforced base-package checks, and added a model-package installation script.</b>
- [2024.08.20] Further optimized the new architecture, improved Windows compatibility, and reduced the resource cost of switching between the Fooocus and Comfy backends. Optimized Flux image generation to run on as little as 6 GB of VRAM, with a quality-first Fluxdev preset and a speed-first Flux+ preset that adapt automatically to system resources. Synced with upstream v2.5.5 and refined the enhanced inpainting UI to better match Fooocus interaction habits.
- [2024.07.31] Optimized the new architecture for more stability and speed. Added support for Kuaishou's Kolors model, so besides SDXL, SimpleSDXL2 can generate images with Pony v6, Playground-v2.5, SD3m, Hunyuan and Kolors on a 6 GB GPU, covering more scenarios. Synced with upstream v2.5.2 and adapted the inpainting UI to be easier for Chinese users to understand and use.
- [2024.06.30] Extended the architecture with a new Comfy backend for a full upgrade to SimpleSDXL2. Supports SDXL, Hunyuan, SD3 and Playground-v2.5 local models with as little as 6 GB of GPU memory, while keeping Fooocus's simple, efficient and stable generation style. Added an image-blending and relighting module that can generate foregrounds and masks on its own, and can automatically cut out products or people to blend them into new scenes. Upgraded the OBP one-click prompt to the latest version. Overall UI polish.
- [2024.05.28] Synced with upstream v2.4.3, adding NSFW filtering and other features.
- [2024.04.23] Upgraded OBP to the latest version and integrated the [Superprompt](https://huggingface.co/roborovski/superprompt-v1) extension to enrich prompts with extra detail. Added an SD3 image-generation API: apply for a free membership at [stability.ai](https://stability.ai/membership), obtain an API key, and generate images with the new SD3 engine. UI improvements include merging the OBP and Superprompt entries into the prompt box, a preset-navigation tooltip overlay, a token counter for the prompt box, and surfacing several image-to-image parameters on the main page.
<b>Important: if this project brings you convenience and value, please don't forget to give it a star ⭐️ to help it grow! 😜</b>
## For download, installation and usage, see the wiki: [SimpleSDXL Creative Image Generation Guide](https://acnmokx5gwds.feishu.cn/wiki/QK3LwOp2oiRRaTkFRhYcO4LonGe)
### If you prefer the previous version, you can skip the upgrade and keep running it
- Complete package of the standalone SimpleSDXL1 branch, with environment, program and default models; it will only receive bug fixes, no new features: [SimpleSDXL1_win64_all.zip (30G)](https://hf-mirror.com/metercai/SimpleSDXL2/resolve/main/SimpleSDXL1_win64_all.zip)
## What is SimpleSDXL? / What's SimpleSDXL?
- **Simplicity first** The essence of AI should be to simplify: make operation simpler and ideas easier to realize. SimpleSDXL keeps Fooocus's ease of use, centers on the SDXL model ecosystem, and moves further toward being open and controllable, simple to use, and feature-complete.
- **Adapted for Chinese users** The Chinese-language environment differs from the English one in many ways - not just the language, but also habits of thought, interaction styles and network conditions. Making SimpleSDXL simpler and more enjoyable for Chinese users is its original motivation.
- **Scenario customization** Text-to-image and image-to-image cover many usage scenarios and need better configuration and customization. Built around **presets and parameter-embedded images**, SimpleSDXL improves Fooocus's **openness and customizability** for concrete scenarios and brings out the full power of SDXL.
## SimpleSDXL2 New Architecture
<img width="500" align=center src="https://github.com/metercai/SimpleSDXL/assets/5652458/364df3ce-3420-4cec-b26e-f315c76b4c1e">
## Enhanced features compared to Fooocus
Features are enhanced on top of Fooocus, with seamless upgrades, synchronized iteration and parallel use. The UI is also adapted for mobile, so PC and phone can be used in sync.
### Chinese-English mixed prompts
Choose freely between online and offline translation, with support for editing after translation - better suited to how prompts are written.
<img width="300" align=right src="https://github.com/metercai/SimpleSDXL/assets/5652458/707999e5-c776-4321-9048-5ad275263ff0">
- [x] **Mixed Chinese-English editing** Prompt text is split into Chinese and English segments that are translated separately and then merged, matching how prompts are expressed.
- [x] **Online and offline translators** Can automatically install an offline translation model (including a slim, small-size variant) or use third-party translation APIs. Offline models need local compute; third-party APIs are cheap and easy to integrate but add a dependency. Users choose whichever configuration fits their situation.
- [x] **Edit after translation** Machine-translation quality is never guaranteed, and poor translations can skew the generated content. Editing after translation makes the quality visible and gives users room to refine it.
- [x] **Random choice among major providers** Uses the stable APIs of major Chinese providers (Baidu, Alibaba and Sogou), picking one at random at startup and keeping it for the session - avoiding load spikes on any single API while keeping translations consistent.
- [ ] **Custom private translation API** A private endpoint can be configured, making it easy to plug in the translation capabilities of large language models such as OpenAI's.
### Intelligent cutout and mask generation
Multiple segmentation algorithms with semantic recognition automatically generate masks, making it easy to composite and post-process generated images.
- [x] **Algorithmic cutout** u2net-based image segmentation separates foreground and background or extracts the main subject of the image to be repainted, generating the corresponding mask for inpainting.
- [x] **Semantic cutout** BERT + SAM recognize the image content from a semantic description, segment it automatically and generate a mask for inpainting.
- [ ] **Click-to-segment cutout** Click a region of the image and SAM automatically identifies and segments the clicked subject, generating a mask for inpainting.
### Wildcard batch prompts
Supports wildcard phrase expressions with a trigger panel, so a batch of images can be generated at random under the same seed.
<img width="380" align=right src="https://github.com/metercai/SimpleSDXL/assets/5652458/4b10e6de-b026-41ea-a206-77d6f9fdf1cd">
- [x] **Phrase syntax** Supports `[Words]` phrases: comma-separated word lists. Under the same seed, one word is drawn from each phrase and the combinations are used to batch-generate images - one image per combination, the total being the product of the phrase sizes (capped by what is actually needed rather than by the image-count parameter).
- [x] **Wildcard phrases** Define phrases with wildcards in the form `[__wildcard__:R|Lnumber:start]`, where R draws at random and L draws in order (default R), number is how many entries to draw (default 1), and start is the index to start from when drawing in order (default 1). See the [wildcard ReadMe](https://github.com/metercai/SimpleSDXL/tree/SimpleSDXL/wildcards/) for the full syntax and the sketch after this list for examples.
- [x] **Auto-trigger input** Typing '[' or '_' in the prompt box automatically opens the wildcard helper, from which wildcards can be appended to the prompt.
- [ ] **Nesting and dynamic loading** Multi-level nesting and dynamic loading of wildcards, for greater expressive power.
- [ ] **Customization and sharing** Define your own wildcard shortcuts and share them with friends.
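An illustrative sketch of the syntax above, assuming a hypothetical `colors` wildcard file; the exact behaviour is defined by the wildcard ReadMe linked above:

```text
a [__colors__:R2] sports car     # draw 2 random entries from the colors wildcard
a [__colors__:L3:2] sports car   # draw 3 entries in order, starting from the 2nd
a [red, blue, green] sports car  # one image per listed word under the same seed
```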
### Enhanced presets and model downloads
Presets can be switched and created from the UI, and model downloads automatically pick a source based on the access IP.
- [x] **Preset navigation** Preset config files in the presets directory become entries in the top navigation; clicking one loads the corresponding file and resets the generation parameters and related configuration.
- [x] **Create presets** Save the current generation parameters as a new preset file in the presets directory; it is added to the top navigation automatically.
- [x] **Extended preset parameters** Extends upstream's preset parameters with developer-mode options, style definitions and wildcard definitions. See the [preset ReadMe](https://github.com/metercai/SimpleSDXL/tree/SimpleSDXL/presets/) for the supported parameters.
- [x] **Unified model IDs and downloads** Connects to a model information base and uses a unified model MUID based on the model file hash. The availability of a preset's generation environment is checked automatically, and missing model files are downloaded to fill the gaps.
- [x] **Generation protection** While the system is generating images, the top navigation is disabled so that loading a preset cannot disrupt the generation environment.
### Browsing and managing finished image sets
Fooocus can only browse the currently generated image set; management of previously generated images is very limited.
- [x] **Search finished images** Finished images can be browsed by generation date. If a single day produced too many images, they are grouped into sub-directory indexes sized to the screen, so the gallery component is not overwhelmed.
- [x] **Delete finished images** Broken images can be deleted on the spot, with the matching parameter log entries removed as well, keeping images and logs consistent.
- [x] **Auto-backfill prompts** While browsing finished image sets, the image's prompt can optionally be backfilled into the prompt box, making it easy to compare, edit and regenerate.
- [x] **Gallery interaction polish** The finished-image index bar adapts to the current state, collapsing and resizing automatically so long directory lists do not crowd the page or get in the way of creating images.
### Parameter-embedded images and regeneration from extracted parameters
Enhanced parameter management: view and embed generation parameters in images, or extract them to backfill the UI for a second generation.
- [x] **View parameters** Extracts the current image's generation parameters from the log file and shows them in an overlay; the overlay content follows along as you switch images within the set.
- [x] **Regenerate from parameters** Overrides the default preset's parameters with the current image's generation parameters, backfills the prompt, and lets you tweak parameters or prompts before regenerating.
- [x] **Parameter-embedded images** When system-wide parameter embedding is not enabled, the current image's parameters can be packed and embedded on demand and saved to a dedicated directory. Such images can later be run through the image-description tool to extract the parameters and recreate the generation environment.
### Cloud compute and more
- [x] **Cloud adaptation** Adds a web-root launch parameter, `--webroot`. When deploying on a cloud server behind a front-end proxy, set this parameter to avoid messy URL paths.
- [ ] **Cloud compute** Separate the frontend and backend so a local generation backend can serve remote frontends, letting devices without a GPU generate images with SDXL models.
- [x] **Upstream sync** SimpleSDXL's enhancement code is kept well structured and compatible with upstream Fooocus, so new upstream features and bug fixes can be merged promptly.
## Online discussion: QQ group 938075852 - join to chat about how to use it and what you would like to see next
<div align=center><img width="250" src="https://github.com/metercai/SimpleSDXL/assets/5652458/28f8c604-79eb-467d-956c-b9137c784194"></div>
## Star History
<a href="https://star-history.com/#metercai/SimpleSDXL&Date">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=metercai/SimpleSDXL&type=Date&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=metercai/SimpleSDXL&type=Date" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=metercai/SimpleSDXL&type=Date" />
</picture>
</a>
---
|
warlockmage/blockassist-bc-bold_scurrying_robin_1754884950
|
warlockmage
| 2025-08-11T04:03:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bold scurrying robin",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:02:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bold scurrying robin
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seung100/blockassist-bc-monstrous_grassy_impala_1754883754
|
seung100
| 2025-08-11T04:00:49Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"monstrous grassy impala",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T04:00:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- monstrous grassy impala
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754884573
|
IvanJAjebu
| 2025-08-11T03:57:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T03:57:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nlee-208/limo_S-dsr1b_T-q32b_100
|
nlee-208
| 2025-08-11T03:54:24Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T02:54:13Z |
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
model_name: limo_S-dsr1b_T-q32b_100
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for limo_S-dsr1b_T-q32b_100
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nlee-208/limo_S-dsr1b_T-q32b_100", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nlee28/cross1/runs/szdiitzl)
This model was trained with SFT.
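A minimal sketch of how such an SFT run is typically set up with TRL's `SFTTrainer`; the dataset below is a placeholder (the card does not state the actual training data), and every hyperparameter shown is illustrative rather than the one used for this checkpoint:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the actual training data is not stated in this card.
dataset = load_dataset("trl-lib/Capybara", split="train")

config = SFTConfig(output_dir="limo_S-dsr1b_T-q32b_100", num_train_epochs=1)

trainer = SFTTrainer(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",  # the stated base model
    args=config,
    train_dataset=dataset,
)
trainer.train()
```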
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
crystalline7/1262463
|
crystalline7
| 2025-08-11T03:52:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T03:51:48Z |
[View on Civ Archive](https://civitaiarchive.com/models/1206931?modelVersionId=1359218)
|
crystalline7/1349200
|
crystalline7
| 2025-08-11T03:51:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T03:51:24Z |
[View on Civ Archive](https://civitaiarchive.com/models/1165325?modelVersionId=1448005)
|
giovannidemuri/llama8b-er-afg-v12-seed2-french
|
giovannidemuri
| 2025-08-11T03:50:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T01:56:44Z |
---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- generated_from_trainer
model-index:
- name: llama8b-er-afg-v12-seed2-french
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama8b-er-afg-v12-seed2-french
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 2
- optimizer: AdamW (ADAMW_TORCH) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
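For reference, a minimal sketch of how the hyperparameters above map onto 🤗 Transformers `TrainingArguments`; the output directory is a placeholder, and anything not listed in the card (precision, logging, saving, etc.) is left at its default:

```python
from transformers import TrainingArguments

# Mirrors only the hyperparameters listed above; everything else stays at defaults.
args = TrainingArguments(
    output_dir="llama8b-er-afg-v12-seed2-french",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=2,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=2,
)
```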
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.2
|
Qika/Qwen3-1.7B
|
Qika
| 2025-08-11T03:49:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T03:43:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
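Since this section of the card is otherwise empty, here is a minimal, hedged sketch for loading the repo as a causal LM with 🤗 Transformers; it assumes the checkpoint is a standard Qwen3 chat model, which the tags suggest but the card does not confirm:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qika/Qwen3-1.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize what you are in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```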
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roeker/blockassist-bc-quick_wiry_owl_1754883998
|
roeker
| 2025-08-11T03:48:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T03:47:31Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1754882988
|
Sayemahsjn
| 2025-08-11T03:48:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T03:47:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754883811
|
IvanJAjebu
| 2025-08-11T03:44:35Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T03:44:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1754883628
|
roeker
| 2025-08-11T03:41:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T03:41:18Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AndanteKIT/blockassist-bc-stinging_loud_tortoise_1754877245
|
AndanteKIT
| 2025-08-11T03:39:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stinging loud tortoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T03:39:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stinging loud tortoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754883204
|
IvanJAjebu
| 2025-08-11T03:34:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T03:34:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
patent/qwen3_4b_sft.a1.2
|
patent
| 2025-08-11T03:34:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T02:58:23Z |
---
base_model: unsloth/Qwen3-4B-Base
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** patent
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Base
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
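A minimal inference sketch, assuming the upload is a full merged checkpoint loadable with plain 🤗 Transformers; if it is an adapter-only upload, it would instead need to be loaded with Unsloth or PEFT on top of the base model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "patent/qwen3_4b_sft.a1.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Summarize the key idea of a patent claim in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```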
|