| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-09-23 00:40:03) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 570 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-09-23 00:38:24) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
sourled/Qwen3-0.6B-Gensyn-Swarm-shaggy_wild_alpaca | sourled | 2025-09-23T00:22:46Z | 135 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am shaggy_wild_alpaca", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-07-12T22:42:55Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am shaggy_wild_alpaca
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
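This auto-generated card leaves the quick-start code empty. As a stopgap, a minimal sketch using the standard transformers causal-LM API (assumed from the model's `transformers` and `text-generation` tags, not documented by the author) might look like:

```python
# Hedged sketch: the standard transformers loading recipe is assumed here;
# the card itself provides no official usage code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sourled/Qwen3-0.6B-Gensyn-Swarm-shaggy_wild_alpaca"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```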
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
karunchan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-energetic_bold_badger | karunchan | 2025-09-23T00:21:57Z | 150 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am energetic_bold_badger", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-16T08:23:35Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am energetic_bold_badger
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
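Since the card provides no code, a minimal sketch via the transformers `pipeline` API (an assumption based on the model's `text-generation` pipeline tag, not author-provided) could be:

```python
# Hedged sketch: the high-level pipeline API is assumed from the card's tags.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="karunchan/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-energetic_bold_badger",
)
result = generator("The capital of France is", max_new_tokens=16)
print(result[0]["generated_text"])
```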
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mafimondol7539/blockassist | mafimondol7539 | 2025-09-22T23:14:12Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "nocturnal vicious worm", "arxiv:2504.07091", "region:us"] | null | 2025-09-18T17:02:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- nocturnal vicious worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kmpartner/k5pcmlra3-test | kmpartner | 2025-09-22T23:12:02Z | 262 | 0 | peft | ["peft", "tensorboard", "diffusers", "safetensors", "arxiv:1910.09700", "base_model:segmind/Segmind-Vega", "base_model:adapter:segmind/Segmind-Vega", "region:us"] | null | 2025-08-21T05:43:16Z |
---
library_name: peft
base_model: segmind/Segmind-Vega
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0
|
Theros/Q3-ColdBrew-8B-Base-test0-Q5_K_M-GGUF | Theros | 2025-09-22T23:10:54Z | 0 | 0 | transformers | ["transformers", "gguf", "text-generation-inference", "unsloth", "qwen3", "llama-cpp", "gguf-my-repo", "en", "base_model:SvalTek/Q3-ColdBrew-8B-Base-test0", "base_model:quantized:SvalTek/Q3-ColdBrew-8B-Base-test0", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-09-22T23:10:26Z |
---
base_model: SvalTek/Q3-ColdBrew-8B-Base-test0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# Theros/Q3-ColdBrew-8B-Base-test0-Q5_K_M-GGUF
This model was converted to GGUF format from [`SvalTek/Q3-ColdBrew-8B-Base-test0`](https://huggingface.co/SvalTek/Q3-ColdBrew-8B-Base-test0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SvalTek/Q3-ColdBrew-8B-Base-test0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Theros/Q3-ColdBrew-8B-Base-test0-Q5_K_M-GGUF --hf-file q3-coldbrew-8b-base-test0-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Theros/Q3-ColdBrew-8B-Base-test0-Q5_K_M-GGUF --hf-file q3-coldbrew-8b-base-test0-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Theros/Q3-ColdBrew-8B-Base-test0-Q5_K_M-GGUF --hf-file q3-coldbrew-8b-base-test0-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Theros/Q3-ColdBrew-8B-Base-test0-Q5_K_M-GGUF --hf-file q3-coldbrew-8b-base-test0-q5_k_m.gguf -c 2048
```
|
negummondol579/blockassist | negummondol579 | 2025-09-22T23:07:45Z | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "camouflaged shy cockroach", "arxiv:2504.07091", "region:us"] | null | 2025-09-18T16:54:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- camouflaged shy cockroach
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity-visual-landmark_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_20_8_all_37_0.001_1280_3 | winnieyangwannan | 2025-09-22T22:38:11Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2_5_vl", "image-to-text", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"] | image-to-text | 2025-09-22T20:23:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
haznitrama/babybabellm-gpt_bert-bul | haznitrama | 2025-09-22T22:18:29Z | 0 | 0 | null | ["pytorch", "gpt_bert", "custom_code", "region:us"] | null | 2025-09-22T22:18:01Z |
# haznitrama/babybabellm-gpt_bert-bul
Rehosted from `suchirsalhan/babybabellm-mono-bul` with standardized remote code and auto_map.
- Original `model_type` preserved.
- Default AutoModel mapping points to GPTBertForCausalLM.
- Added both causal & masked LM wrappers for evaluation.
Example:
```python
from transformers import AutoTokenizer, AutoModel

model_id = "haznitrama/babybabellm-gpt_bert-bul"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

print(model(**tokenizer("Hello world", return_tensors="pt")).logits.shape)
```
Generation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "haznitrama/babybabellm-gpt_bert-bul"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

output_ids = model.generate(**tokenizer("Hello", return_tensors="pt"), max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
|
FIM4Science/fim-sde | FIM4Science | 2025-09-22T22:03:24Z | 0 | 0 | transformers | ["transformers", "pytorch", "fimsde", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-09-22T06:53:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
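Since `fimsde` is a custom architecture (the repo carries a `fimsde` model type tag), loading presumably requires `trust_remote_code=True`. A minimal, unverified loading sketch:

```python
# Hedged sketch: remote-code loading is assumed for the custom "fimsde"
# architecture; the model's input format is not documented on this card.
from transformers import AutoModel

model = AutoModel.from_pretrained("FIM4Science/fim-sde", trust_remote_code=True)
print(type(model).__name__)
```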
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
<!-- Dataset row: modelId=darkvex/Qwen3-0.6B-Gensyn-Swarm-monstrous_robust_wolf · author=darkvex · last_modified=2025-09-22T21:11:54Z · downloads=15 · likes=0 · library=transformers · pipeline_tag=text-generation · created=2025-09-21T20:07:53Z -->
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am monstrous_robust_wolf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
<!-- Dataset row: modelId=RedHatAI/Qwen2.5-7B-Instruct · author=RedHatAI · last_modified=2025-09-22T20:31:15Z · downloads=258 · likes=0 · pipeline_tag=text-generation · created=2025-05-09T23:17:13Z -->
---
language:
- zh
- en
- fr
- es
- pt
- de
- it
- ru
- ja
- ko
- vi
- th
- ar
- id
- tr
- fa
- nl
- pl
- cs
- he
- sv
- fi
- da
- no
- el
- bg
- uk
- ur
- sr
- ms
- zsm
- nld
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
tags:
- qwen
- qwen2_5
- qwen2_5_instruct
- conversational
- text-generation-inference
license: apache-2.0
license_name: apache-2.0
name: RedHatAI/Qwen2.5-7B-Instruct
description: The instruction-tuned 7B Qwen2.5 model, which has been optimized for multilingual dialogue use cases.
readme: https://huggingface.co/RedHatAI/Qwen2.5-7B-Instruct/main/README.md
tasks:
- text-to-text
provider: Alibaba Cloud
license_link: https://www.apache.org/licenses/LICENSE-2.0
validated_on:
- RHOAI 2.20
- RHAIIS 3.0
- RHELAI 1.5
---
<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
Qwen2.5-7B-Instruct
<img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>
<a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
<img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
</a>
<a href="https://chat.qwenlm.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
**Validated on:** RHOAI 2.20, RHAIIS 3.0, RHELAI 1.5
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 7B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, and Attention QKV bias
- Number of Parameters: 7.61B
- Number of Parameters (Non-Embedding): 6.53B
- Number of Layers: 28
- Number of Attention Heads (GQA): 28 for Q and 4 for KV
- Context Length: Full 131,072 tokens and generation 8192 tokens
- Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
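As a quick guard, you can check the installed version before loading the model. This is a minimal sketch; it assumes the `packaging` library, which ships as a dependency of `transformers`:

```python
from packaging.version import Version

MIN_TRANSFORMERS = Version("4.37.0")

def supports_qwen2(installed: str) -> bool:
    # Plain string comparison would misorder versions ("4.4.0" < "4.37.0" as strings),
    # so parse the components numerically instead.
    return Version(installed) >= MIN_TRANSFORMERS

print(supports_qwen2("4.36.2"))  # False: this version raises KeyError: 'qwen2'
print(supports_qwen2("4.44.0"))  # True
```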
## Quickstart
The following snippet shows how to load the tokenizer and model and generate content using `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Deployment
This model can be deployed efficiently on vLLM, Red Hat Enterprise Linux AI, and Red Hat OpenShift AI, as shown in the examples below.
Deploy on <strong>vLLM</strong>
```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
model_id = "RedHatAI/Qwen2.5-7B-Instruct"
number_gpus = 4
sampling_params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=256)
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Give me a short introduction to large language model."
llm = LLM(model=model_id, tensor_parallel_size=number_gpus)
outputs = llm.generate(prompt, sampling_params)
generated_text = outputs[0].outputs[0].text
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
<details>
<summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
```bash
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 8 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/Qwen2.5-7B-Instruct
```
See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Enterprise Linux AI</strong></summary>
```bash
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/qwen2-5-7b-instruct:1.5
```
```bash
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/qwen2-5-7b-instruct
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/qwen2-5-7b-instruct
```
See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Openshift AI</strong></summary>
```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
annotations:
openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
labels:
opendatahub.io/dashboard: 'true'
spec:
annotations:
prometheus.io/port: '8080'
prometheus.io/path: '/metrics'
multiModel: false
supportedModelFormats:
- autoSelect: true
name: vLLM
containers:
- name: kserve-container
image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
command:
- python
- -m
- vllm.entrypoints.openai.api_server
args:
- "--port=8080"
- "--model=/mnt/models"
- "--served-model-name={{.Name}}"
env:
- name: HF_HOME
value: /tmp/hf_home
ports:
- containerPort: 8080
protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
annotations:
openshift.io/display-name: Qwen2.5-7B-Instruct # OPTIONAL CHANGE
serving.kserve.io/deploymentMode: RawDeployment
name: Qwen2.5-7B-Instruct # specify model name. This value will be used to invoke the model in the payload
labels:
opendatahub.io/dashboard: 'true'
spec:
predictor:
maxReplicas: 1
minReplicas: 1
model:
modelFormat:
name: vLLM
name: ''
resources:
limits:
cpu: '2' # this is model specific
memory: 8Gi # this is model specific
nvidia.com/gpu: '1' # this is accelerator specific
requests: # same comment for this block
cpu: '1'
memory: 4Gi
nvidia.com/gpu: '1'
runtime: vllm-cuda-runtime # must match the ServingRuntime name above
storageUri: oci://registry.redhat.io/rhelai1/modelcar-qwen2-5-7b-instruct:1.5
tolerations:
- effect: NoSchedule
key: nvidia.com/gpu
operator: Exists
```
```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>
# apply both resources to run model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml
# Apply the InferenceService
oc apply -f inferenceservice.yaml
```
```bash
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.
# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<domain>/v1/chat/completions \
  -H "Content-Type: application/json" \
-d '{
"model": "Qwen2.5-7B-Instruct",
"stream": true,
"stream_options": {
"include_usage": true
},
"max_tokens": 1,
"messages": [
{
"role": "user",
"content": "How can a bee fly when its wings are so small?"
}
]
}'
```
See [Red Hat Openshift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
</details>
### Processing Long Texts
The current `config.json` is set for context length up to 32,768 tokens.
To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
For supported frameworks, you could add the following to `config.json` to enable YaRN:
```json
{
...,
"rope_scaling": {
"factor": 4.0,
"original_max_position_embeddings": 32768,
"type": "yarn"
}
}
```
For deployment, we recommend using vLLM.
Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
We advise adding the `rope_scaling` configuration only when processing long contexts is required.
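The effective context length is simply the original window multiplied by the scaling factor, which is how the 131,072-token figure above follows from this configuration:

```python
# Effective context length under YaRN = original window * scaling factor
original_max_position_embeddings = 32768
factor = 4.0
print(int(original_max_position_embeddings * factor))  # 131072
```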
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
<!-- Dataset row: modelId=RedHatAI/granite-3.1-8b-instruct-FP8-dynamic · author=RedHatAI · last_modified=2025-09-22T20:30:21Z · downloads=226 · likes=1 · pipeline_tag=text-generation · created=2025-01-07T18:43:34Z -->
---
language:
- en
- de
- es
- fr
- ja
- pt
- ar
- cs
- it
- ko
- nl
- zh
base_model:
- ibm-granite/granite-3.1-8b-instruct
pipeline_tag: text-generation
tags:
- granite
- fp8
- vllm
- conversational
- compressed-tensors
license: apache-2.0
license_name: apache-2.0
name: RedHatAI/granite-3.1-8b-instruct-FP8-dynamic
description: This model was obtained by quantizing the weights and activations of ibm-granite/granite-3.1-8b-instruct to FP8 data type.
readme: https://huggingface.co/RedHatAI/granite-3.1-8b-instruct-FP8-dynamic/main/README.md
tasks:
- text-to-text
provider: IBM
license_link: https://www.apache.org/licenses/LICENSE-2.0
validated_on:
- RHOAI 2.20
- RHAIIS 3.0
- RHELAI 1.5
---
<h1 style="display: flex; align-items: center; gap: 10px; margin: 0;">
Granite-3.1-8b-instruct-FP8-dynamic
<img src="https://www.redhat.com/rhdc/managed-files/Catalog-Validated_model_0.png" alt="Model Icon" width="40" style="margin: 0; padding: 0;" />
</h1>
<a href="https://www.redhat.com/en/products/ai/validated-models" target="_blank" style="margin: 0; padding: 0;">
<img src="https://www.redhat.com/rhdc/managed-files/Validated_badge-Dark.png" alt="Validated Badge" width="250" style="margin: 0; padding: 0;" />
</a>
## Model Overview
- **Model Architecture:** granite-3.1-8b-instruct
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Release Date:** 1/8/2025
- **Version:** 1.0
- **Validated on:** RHOAI 2.20, RHAIIS 3.0, RHELAI 1.5
- **Model Developers:** Neural Magic
Quantized version of [ibm-granite/granite-3.1-8b-instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct).
It achieves an average score of 70.57 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 70.30.
### Model Optimizations
This model was obtained by quantizing the weights and activations of [ibm-granite/granite-3.1-8b-instruct](https://huggingface.co/ibm-granite/granite-3.1-8b-instruct) to FP8 data type, ready for inference with vLLM >= 0.5.2.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%. Only the weights and activations of the linear operators within transformers blocks are quantized.
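Back-of-the-envelope, the savings follow directly from the bit widths. This is a rough sketch: the 8B parameter count is rounded for illustration, and the embeddings plus the unquantized `lm_head` keep the real saving slightly below 50%:

```python
params = 8.0e9               # approximate parameter count (illustrative assumption)
bf16_gb = params * 2 / 1e9   # 16-bit weights: 2 bytes per parameter
fp8_gb = params * 1 / 1e9    # FP8 weights: 1 byte per parameter
print(f"{bf16_gb:.0f} GB -> {fp8_gb:.0f} GB ({100 * (1 - fp8_gb / bf16_gb):.0f}% smaller)")
```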
## Deployment
### Use with vLLM
This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
max_model_len, tp_size = 4096, 1
model_name = "neuralmagic/granite-3.1-8b-instruct-FP8-dynamic"
tokenizer = AutoTokenizer.from_pretrained(model_name)
llm = LLM(model=model_name, tensor_parallel_size=tp_size, max_model_len=max_model_len, trust_remote_code=True)
sampling_params = SamplingParams(temperature=0.3, max_tokens=256, stop_token_ids=[tokenizer.eos_token_id])
messages_list = [
[{"role": "user", "content": "Who are you? Please respond in pirate speak!"}],
]
prompt_token_ids = [tokenizer.apply_chat_template(messages, add_generation_prompt=True) for messages in messages_list]
outputs = llm.generate(prompt_token_ids=prompt_token_ids, sampling_params=sampling_params)
generated_text = [output.outputs[0].text for output in outputs]
print(generated_text)
```
vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
<details>
<summary>Deploy on <strong>Red Hat AI Inference Server</strong></summary>
```bash
podman run --rm -it --device nvidia.com/gpu=all -p 8000:8000 \
--ipc=host \
--env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
--env "HF_HUB_OFFLINE=0" -v ~/.cache/vllm:/home/vllm/.cache \
--name=vllm \
registry.access.redhat.com/rhaiis/rh-vllm-cuda \
vllm serve \
--tensor-parallel-size 1 \
--max-model-len 32768 \
--enforce-eager --model RedHatAI/granite-3.1-8b-instruct-FP8-dynamic
```
See [Red Hat AI Inference Server documentation](https://docs.redhat.com/en/documentation/red_hat_ai_inference_server/) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Enterprise Linux AI</strong></summary>
```bash
# Download model from Red Hat Registry via docker
# Note: This downloads the model to ~/.cache/instructlab/models unless --model-dir is specified.
ilab model download --repository docker://registry.redhat.io/rhelai1/granite-3-1-8b-instruct-fp8-dynamic:1.5
```
```bash
# Serve model via ilab
ilab model serve --model-path ~/.cache/instructlab/models/granite-3-1-8b-instruct-fp8-dynamic -- --trust-remote-code
# Chat with model
ilab model chat --model ~/.cache/instructlab/models/granite-3-1-8b-instruct-fp8-dynamic
```
See [Red Hat Enterprise Linux AI documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4) for more details.
</details>
<details>
<summary>Deploy on <strong>Red Hat Openshift AI</strong></summary>
```yaml
# Setting up vllm server with ServingRuntime
# Save as: vllm-servingruntime.yaml
apiVersion: serving.kserve.io/v1alpha1
kind: ServingRuntime
metadata:
name: vllm-cuda-runtime # OPTIONAL CHANGE: set a unique name
annotations:
openshift.io/display-name: vLLM NVIDIA GPU ServingRuntime for KServe
opendatahub.io/recommended-accelerators: '["nvidia.com/gpu"]'
labels:
opendatahub.io/dashboard: 'true'
spec:
annotations:
prometheus.io/port: '8080'
prometheus.io/path: '/metrics'
multiModel: false
supportedModelFormats:
- autoSelect: true
name: vLLM
containers:
- name: kserve-container
image: quay.io/modh/vllm:rhoai-2.20-cuda # CHANGE if needed. If AMD: quay.io/modh/vllm:rhoai-2.20-rocm
command:
- python
- -m
- vllm.entrypoints.openai.api_server
args:
- "--port=8080"
- "--model=/mnt/models"
- "--served-model-name={{.Name}}"
env:
- name: HF_HOME
value: /tmp/hf_home
ports:
- containerPort: 8080
protocol: TCP
```
```yaml
# Attach model to vllm server. This is an NVIDIA template
# Save as: inferenceservice.yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
annotations:
openshift.io/display-name: granite-3-1-8b-instruct-fp8-dynamic # OPTIONAL CHANGE
serving.kserve.io/deploymentMode: RawDeployment
name: granite-3-1-8b-instruct-fp8-dynamic # specify model name. This value will be used to invoke the model in the payload
labels:
opendatahub.io/dashboard: 'true'
spec:
predictor:
maxReplicas: 1
minReplicas: 1
model:
args:
- '--trust-remote-code'
modelFormat:
name: vLLM
name: ''
resources:
limits:
cpu: '2' # this is model specific
memory: 8Gi # this is model specific
nvidia.com/gpu: '1' # this is accelerator specific
requests: # same comment for this block
cpu: '1'
memory: 4Gi
nvidia.com/gpu: '1'
runtime: vllm-cuda-runtime # must match the ServingRuntime name above
storageUri: oci://registry.redhat.io/rhelai1/modelcar-granite-3-1-8b-instruct-fp8-dynamic:1.5
tolerations:
- effect: NoSchedule
key: nvidia.com/gpu
operator: Exists
```
```bash
# make sure first to be in the project where you want to deploy the model
# oc project <project-name>
# apply both resources to run model
# Apply the ServingRuntime
oc apply -f vllm-servingruntime.yaml
# Apply the InferenceService
oc apply -f inferenceservice.yaml
```
```bash
# Replace <inference-service-name> and <cluster-ingress-domain> below:
# - Run `oc get inferenceservice` to find your URL if unsure.
# Call the server using curl:
curl https://<inference-service-name>-predictor-default.<domain>/v1/chat/completions \
  -H "Content-Type: application/json" \
-d '{
"model": "granite-3-1-8b-instruct-fp8-dynamic",
"stream": true,
"stream_options": {
"include_usage": true
},
"max_tokens": 1,
"messages": [
{
"role": "user",
"content": "How can a bee fly when its wings are so small?"
}
]
}'
```
See [Red Hat Openshift AI documentation](https://docs.redhat.com/en/documentation/red_hat_openshift_ai/2025) for more details.
</details>
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
<details>
<summary>Model Creation Code</summary>
```bash
python quantize.py --model_id ibm-granite/granite-3.1-8b-instruct --save_path "output_dir/"
```
```python
import argparse
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
import os
def main():
parser = argparse.ArgumentParser(description='Quantize a transformer model to FP8')
parser.add_argument('--model_id', type=str, required=True,
help='The model ID from HuggingFace (e.g., "meta-llama/Meta-Llama-3-8B-Instruct")')
parser.add_argument('--save_path', type=str, default='.',
help='Custom path to save the quantized model. If not provided, will use model_name-FP8-dynamic')
args = parser.parse_args()
# Load model
model = AutoModelForCausalLM.from_pretrained(
args.model_id, device_map="auto", torch_dtype="auto", trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(args.model_id)
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)
# Apply quantization
oneshot(model=model, recipe=recipe)
save_path = os.path.join(args.save_path, args.model_id.split("/")[1] + "-FP8-dynamic")
os.makedirs(save_path, exist_ok=True)
# Save to disk in compressed-tensors format
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
if __name__ == "__main__":
main()
```
</details>
## Evaluation
The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard), OpenLLM Leaderboard [V2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/) and on [HumanEval](https://github.com/neuralmagic/evalplus), using the following commands:
<details>
<summary>Evaluation Commands</summary>
OpenLLM Leaderboard V1:
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/granite-3.1-8b-instruct-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks openllm \
--write_out \
--batch_size auto \
--output_path output_dir \
--show_config
```
OpenLLM Leaderboard V2:
```
lm_eval \
--model vllm \
--model_args pretrained="neuralmagic/granite-3.1-8b-instruct-FP8-dynamic",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=1,gpu_memory_utilization=0.8,enable_chunked_prefill=True,trust_remote_code=True \
--tasks leaderboard \
--write_out \
--batch_size auto \
--output_path output_dir \
--show_config
```
#### HumanEval
##### Generation
```
python3 codegen/generate.py \
--model neuralmagic/granite-3.1-8b-instruct-FP8-dynamic \
--bs 16 \
--temperature 0.2 \
--n_samples 50 \
--root "." \
--dataset humaneval
```
##### Sanitization
```
python3 evalplus/sanitize.py \
humaneval/neuralmagic--granite-3.1-8b-instruct-FP8-dynamic_vllm_temp_0.2
```
##### Evaluation
```
evalplus.evaluate \
--dataset humaneval \
--samples humaneval/neuralmagic--granite-3.1-8b-instruct-FP8-dynamic_vllm_temp_0.2-sanitized
```
</details>
### Accuracy
<table>
<thead>
<tr>
<th>Category</th>
<th>Metric</th>
<th>ibm-granite/granite-3.1-8b-instruct</th>
<th>neuralmagic/granite-3.1-8b-instruct-FP8-dynamic</th>
<th>Recovery (%)</th>
</tr>
</thead>
<tbody>
<!-- OpenLLM Leaderboard V1 -->
<tr>
<td rowspan="7"><b>OpenLLM V1</b></td>
<td>ARC-Challenge (Acc-Norm, 25-shot)</td>
<td>66.81</td>
<td>66.81</td>
<td>100.00</td>
</tr>
<tr>
<td>GSM8K (Strict-Match, 5-shot)</td>
<td>64.52</td>
<td>66.64</td>
<td>103.29</td>
</tr>
<tr>
<td>HellaSwag (Acc-Norm, 10-shot)</td>
<td>84.18</td>
<td>84.16</td>
<td>99.98</td>
</tr>
<tr>
<td>MMLU (Acc, 5-shot)</td>
<td>65.52</td>
<td>65.36</td>
<td>99.76</td>
</tr>
<tr>
<td>TruthfulQA (MC2, 0-shot)</td>
<td>60.57</td>
<td>60.52</td>
<td>99.92</td>
</tr>
<tr>
<td>Winogrande (Acc, 5-shot)</td>
<td>80.19</td>
<td>79.95</td>
<td>99.70</td>
</tr>
<tr>
<td><b>Average Score</b></td>
<td><b>70.30</b></td>
<td><b>70.57</b></td>
<td><b>100.39</b></td>
</tr>
<!-- OpenLLM Leaderboard V2 -->
<tr>
<td rowspan="7"><b>OpenLLM V2</b></td>
<td>IFEval (Inst Level Strict Acc, 0-shot)</td>
<td>74.10</td>
<td>73.62</td>
<td>99.35</td>
</tr>
<tr>
<td>BBH (Acc-Norm, 3-shot)</td>
<td>53.19</td>
<td>53.26</td>
<td>100.13</td>
</tr>
<tr>
<td>Math-Hard (Exact-Match, 4-shot)</td>
<td>14.77</td>
<td>16.79</td>
<td>113.66</td>
</tr>
<tr>
<td>GPQA (Acc-Norm, 0-shot)</td>
<td>31.76</td>
<td>32.58</td>
<td>102.58</td>
</tr>
<tr>
<td>MUSR (Acc-Norm, 0-shot)</td>
<td>46.01</td>
<td>47.34</td>
<td>102.89</td>
</tr>
<tr>
<td>MMLU-Pro (Acc, 5-shot)</td>
<td>35.81</td>
<td>35.72</td>
<td>99.75</td>
</tr>
<tr>
<td><b>Average Score</b></td>
<td><b>42.61</b></td>
<td><b>43.22</b></td>
<td><b>101.43</b></td>
</tr>
<!-- HumanEval -->
<tr>
<td rowspan="2"><b>Coding</b></td>
<td>HumanEval Pass@1</td>
<td>71.00</td>
<td>69.90</td>
<td><b>98.45</b></td>
</tr>
</tbody>
</table>
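The Recovery column is simply the quantized score expressed as a percentage of the baseline score; for example, the GSM8K and HellaSwag rows reproduce as follows (a small sketch):

```python
def recovery(baseline: float, quantized: float) -> float:
    """Quantized score relative to the unquantized baseline, in percent."""
    return round(100 * quantized / baseline, 2)

print(recovery(64.52, 66.64))  # GSM8K row -> 103.29
print(recovery(84.18, 84.16))  # HellaSwag row -> 99.98
```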
## Inference Performance
This model achieves up to 1.5x speedup in single-stream deployment and up to 1.1x speedup in multi-stream asynchronous deployment on L40 GPUs.
The following performance benchmarks were conducted with [vLLM](https://docs.vllm.ai/en/latest/) version 0.6.6.post1, and [GuideLLM](https://github.com/neuralmagic/guidellm).
<details>
<summary>Benchmarking Command</summary>
```bash
guidellm --model neuralmagic/granite-3.1-8b-instruct-FP8-dynamic --target "http://localhost:8000/v1" --data-type emulated --data "prompt_tokens=<prompt_tokens>,generated_tokens=<generated_tokens>" --max-seconds 360 --backend aiohttp_server
```
</details>
### Single-stream performance (measured with vLLM version 0.6.6.post1)
<table>
<tr>
<td></td>
<td></td>
<td></td>
<th style="text-align: center;" colspan="7" >Latency (s)</th>
</tr>
<tr>
<th>GPU class</th>
<th>Model</th>
<th>Speedup</th>
<th>Code Completion<br>prefill: 256 tokens<br>decode: 1024 tokens</th>
<th>Docstring Generation<br>prefill: 768 tokens<br>decode: 128 tokens</th>
<th>Code Fixing<br>prefill: 1024 tokens<br>decode: 1024 tokens</th>
<th>RAG<br>prefill: 1024 tokens<br>decode: 128 tokens</th>
<th>Instruction Following<br>prefill: 256 tokens<br>decode: 128 tokens</th>
<th>Multi-turn Chat<br>prefill: 512 tokens<br>decode: 256 tokens</th>
<th>Large Summarization<br>prefill: 4096 tokens<br>decode: 512 tokens</th>
</tr>
<tr>
<td style="vertical-align: middle;" rowspan="3" >L40</td>
<td>granite-3.1-8b-instruct</td>
<td></td>
<td>25.1</td>
<td>3.2</td>
<td>25.3</td>
<td>3.2</td>
<td>3.2</td>
<td>6.3</td>
<td>13.4</td>
</tr>
<tr>
<td>granite-3.1-8b-instruct-FP8-dynamic<br>(this model)</td>
<td>1.47</td>
<td>16.8</td>
<td>2.2</td>
<td>17.1</td>
<td>2.2</td>
<td>2.1</td>
<td>4.2</td>
<td>9.3</td>
</tr>
<tr>
<td>granite-3.1-8b-instruct-quantized.w4a16</td>
<td>2.72</td>
<td>8.9</td>
<td>1.2</td>
<td>9.2</td>
<td>1.2</td>
<td>1.1</td>
<td>2.3</td>
<td>5.3</td>
</tr>
</table>
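The headline single-stream speedup can be recovered from the latencies above: divide the baseline latency by the FP8 latency for each workload and average. The sketch below recomputes it from the rounded table values (which lands near 1.48; the reported 1.47 presumably comes from unrounded measurements):

```python
# Per-workload single-stream speedup = baseline latency / FP8 latency (L40 row).
# Workload order: code completion, docstring, code fixing, RAG,
# instruction following, multi-turn chat, large summarization.
baseline_s = [25.1, 3.2, 25.3, 3.2, 3.2, 6.3, 13.4]  # granite-3.1-8b-instruct
fp8_s      = [16.8, 2.2, 17.1, 2.2, 2.1, 4.2, 9.3]   # FP8-dynamic (this model)

ratios = [b / q for b, q in zip(baseline_s, fp8_s)]
mean_speedup = sum(ratios) / len(ratios)
print(round(mean_speedup, 2))  # ~1.48 from rounded latencies
```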
### Multi-stream asynchronous performance (measured with vLLM version 0.6.6.post1)
<table>
<tr>
<td></td>
<td></td>
<td></td>
<th style="text-align: center;" colspan="7" >Maximum Throughput (Queries per Second)</th>
</tr>
<tr>
<th>GPU class</th>
<th>Model</th>
<th>Speedup</th>
<th>Code Completion<br>prefill: 256 tokens<br>decode: 1024 tokens</th>
<th>Docstring Generation<br>prefill: 768 tokens<br>decode: 128 tokens</th>
<th>Code Fixing<br>prefill: 1024 tokens<br>decode: 1024 tokens</th>
<th>RAG<br>prefill: 1024 tokens<br>decode: 128 tokens</th>
<th>Instruction Following<br>prefill: 256 tokens<br>decode: 128 tokens</th>
<th>Multi-turn Chat<br>prefill: 512 tokens<br>decode: 256 tokens</th>
<th>Large Summarization<br>prefill: 4096 tokens<br>decode: 512 tokens</th>
</tr>
<tr>
<td style="vertical-align: middle;" rowspan="3" >L40</td>
<td>granite-3.1-8b-instruct</td>
<td></td>
<td>1.4</td>
<td>7.8</td>
<td>1.1</td>
<td>6.2</td>
<td>15.5</td>
<td>6.0</td>
<td>0.7</td>
</tr>
<tr>
<td>granite-3.1-8b-instruct-FP8-dynamic<br>(this model)</td>
<td>1.12</td>
<td>2.1</td>
<td>7.4</td>
<td>1.3</td>
<td>5.9</td>
<td>15.3</td>
<td>6.9</td>
<td>0.8</td>
</tr>
<tr>
		<td>granite-3.1-8b-instruct-quantized.w4a16</td>
<td>1.29</td>
<td>2.4</td>
<td>8.9</td>
<td>1.4</td>
<td>7.1</td>
<td>17.8</td>
<td>7.8</td>
<td>1.0</td>
</tr>
</table>
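For throughput the ratio flips direction: multi-stream speedup is FP8 queries-per-second divided by baseline queries-per-second, averaged across workloads. Recomputing from the table values recovers the reported 1.12x:

```python
# Per-workload multi-stream speedup = FP8 QPS / baseline QPS (L40 row).
baseline_qps = [1.4, 7.8, 1.1, 6.2, 15.5, 6.0, 0.7]  # granite-3.1-8b-instruct
fp8_qps      = [2.1, 7.4, 1.3, 5.9, 15.3, 6.9, 0.8]  # FP8-dynamic (this model)

ratios = [q / b for q, b in zip(fp8_qps, baseline_qps)]
mean_speedup = sum(ratios) / len(ratios)
print(round(mean_speedup, 2))  # 1.12
```

Note that a few workloads (e.g. docstring generation) dip slightly below 1.0 under load, so the aggregate is more modest than the single-stream case.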
|
amethyst9/2064096
|
amethyst9
| 2025-09-22T19:52:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-22T19:52:18Z |
[View on Civ Archive](https://civarchive.com/models/1918035?modelVersionId=2170846)
|
dorangao/landify-chatbot-tool-expert-v1-merged
|
dorangao
| 2025-09-22T19:00:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-22T10:21:01Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
litert-community/TinyLlama-1.1B-Chat-v1.0
|
litert-community
| 2025-09-22T18:59:20Z | 152 | 0 |
litert-lm
|
[
"litert-lm",
"tflite",
"chat",
"text-generation",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-04-30T21:19:49Z |
---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
pipeline_tag: text-generation
library_name: litert-lm
tags:
- chat
---
# litert-community/TinyLlama-1.1B-Chat-v1.0
This model provides a few variants of
[TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) that are ready for
deployment on Android using the
[LiteRT (fka TFLite) stack](https://ai.google.dev/edge/litert) and
[MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference).
## Use the models
### Colab
*Disclaimer: The target deployment surface for the LiteRT models is
Android/iOS/Web and the stack has been optimized for performance on these
targets. Trying out the system in Colab is an easier way to familiarize yourself
with the LiteRT stack, with the caveat that the performance (memory and latency)
on Colab could be much worse than on a local device.*
[](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/TinyLlama-1.1B-Chat-v1.0/blob/main/notebook.ipynb)
### Android
* Download and install
[the apk](https://github.com/google-ai-edge/mediapipe-samples/releases/latest/download/llm_inference-debug.apk).
* Follow the instructions in the app.
To build the demo app from source, please follow the
[instructions](https://github.com/google-ai-edge/mediapipe-samples/blob/main/examples/llm_inference/android/README.md)
from the GitHub repository.
## Performance
### Android
Note that all benchmark stats are from a Samsung S24 Ultra with a
1280-token KV cache and multiple prefill signatures enabled.
<table border="1">
<tr>
<th></th>
<th>Backend</th>
<th>Prefill (tokens/sec)</th>
<th>Decode (tokens/sec)</th>
<th>Time-to-first-token (sec)</th>
<th>Memory (RSS in MB)</th>
<th>Model size (MB)</th>
</tr>
<tr>
<td>fp32 (baseline)</td>
<td>cpu</td>
<td><p style="text-align: right">51.14 tk/s</p></td>
<td><p style="text-align: right">9.23 tk/s</p></td>
<td><p style="text-align: right">9.25 s</p></td>
<td><p style="text-align: right">6,155 MB</p></td>
<td><p style="text-align: right">4,208 MB</p></td>
</tr>
<tr>
<td>dynamic_int8</td>
<td>cpu</td>
<td><p style="text-align: right">156.10 tk/s</p></td>
<td><p style="text-align: right">26.34 tk/s</p></td>
<td><p style="text-align: right">3.80 s</p></td>
<td><p style="text-align: right">2,359 MB</p></td>
<td><p style="text-align: right">1,095 MB</p></td>
</tr>
</table>
* Model Size: measured by the size of the .tflite flatbuffer (serialization
format for LiteRT models)
* Memory: indicator of peak RAM usage
* The inference on CPU is accelerated via the LiteRT
[XNNPACK](https://github.com/google/XNNPACK) delegate with 4 threads
* Benchmark is done assuming XNNPACK cache is enabled
* dynamic_int8: quantized model with int8 weights and float activations.
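From the table above, dynamic_int8 quantization buys roughly a 3.8x smaller flatbuffer and close to a 2.9x decode speedup over the fp32 baseline. A quick check from the rounded table values:

```python
# TinyLlama-1.1B on Samsung S24 Ultra (CPU, XNNPACK): fp32 baseline vs dynamic_int8.
size_ratio    = 4208 / 1095    # model size, MB
decode_ratio  = 26.34 / 9.23   # decode tokens/sec
prefill_ratio = 156.10 / 51.14 # prefill tokens/sec

print(round(size_ratio, 2), round(decode_ratio, 2), round(prefill_ratio, 2))
# 3.84 2.85 3.05
```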
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_16_4_all_5_0.001_1280_3
|
winnieyangwannan
| 2025-09-22T18:23:01Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T23:19:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF
|
mradermacher
| 2025-09-22T18:04:20Z | 3,424 | 0 |
transformers
|
[
"transformers",
"gguf",
"trl",
"text-generation-inference",
"math",
"science",
"code",
"v3.1",
"stem",
"en",
"base_model:prithivMLmods/Capella-Qwen3-DS-V3.1-4B",
"base_model:quantized:prithivMLmods/Capella-Qwen3-DS-V3.1-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-08T03:37:29Z |
---
base_model: prithivMLmods/Capella-Qwen3-DS-V3.1-4B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- trl
- text-generation-inference
- math
- science
- code
- v3.1
- stem
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/Capella-Qwen3-DS-V3.1-4B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Capella-Qwen3-DS-V3.1-4B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Capella-Qwen3-DS-V3.1-4B-i1-GGUF/resolve/main/Capella-Qwen3-DS-V3.1-4B.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Rishit-3/blockassist
|
Rishit-3
| 2025-09-22T18:04:15Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scampering endangered donkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T17:47:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scampering endangered donkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
litert-community/Qwen2.5-1.5B-Instruct
|
litert-community
| 2025-09-22T17:57:04Z | 409 | 23 |
litert-lm
|
[
"litert-lm",
"tflite",
"chat",
"text-generation",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-04-30T19:19:22Z |
---
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
pipeline_tag: text-generation
library_name: litert-lm
tags:
- chat
---
# litert-community/Qwen2.5-1.5B-Instruct
This model provides a few variants of
[Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) that are ready for
deployment on Android using the
[LiteRT (fka TFLite) stack](https://ai.google.dev/edge/litert),
[MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference) and
[LiteRT-LM](https://github.com/google-ai-edge/LiteRT-LM).
## Use the models
### Colab
*Disclaimer: The target deployment surface for the LiteRT models is
Android/iOS/Web and the stack has been optimized for performance on these
targets. Trying out the system in Colab is an easier way to familiarize yourself
with the LiteRT stack, with the caveat that the performance (memory and latency)
on Colab could be much worse than on a local device.*
[](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/Qwen2.5-1.5B-Instruct/blob/main/notebook.ipynb)
### Android
#### Edge Gallery App
* Download or build the [app](https://github.com/google-ai-edge/gallery?tab=readme-ov-file#-get-started-in-minutes) from GitHub.
* Install the [app](https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery&pli=1) from Google Play.
* Follow the instructions in the app.
#### LLM Inference API
* Download and install
[the apk](https://github.com/google-ai-edge/gallery/releases/latest/download/ai-edge-gallery.apk).
* Follow the instructions in the app.
To build the demo app from source, please follow the [instructions](https://github.com/google-ai-edge/gallery/blob/main/README.md)
from the GitHub repository.
### iOS
* Clone the [MediaPipe samples](https://github.com/google-ai-edge/mediapipe-samples)
repository and follow the [instructions](https://github.com/google-ai-edge/mediapipe-samples/tree/main/examples/llm_inference/ios/README.md)
to build the LLM Inference iOS Sample App using XCode.
* Run the app via the iOS simulator or deploy to an iOS device.
## Performance
### Android
Note that all benchmark stats are from a Samsung S25 Ultra with multiple prefill signatures enabled.
<table border="1">
<tr>
<th style="text-align: left">Backend</th>
<th style="text-align: left">Quantization scheme</th>
<th style="text-align: left">Context length</th>
<th style="text-align: left">Prefill (tokens/sec)</th>
<th style="text-align: left">Decode (tokens/sec)</th>
<th style="text-align: left">Time-to-first-token (sec)</th>
<th style="text-align: left">Model size (MB)</th>
<th style="text-align: left">Peak RSS Memory (MB)</th>
<th style="text-align: left">GPU Memory (RSS in MB)</th>
<th></th>
</tr>
<tr>
<td><p style="text-align: left">CPU</p></td>
<td><p style="text-align: left">fp32 (baseline)</p></td>
<td><p style="text-align: right">1280</p></td>
<td><p style="text-align: right">49.50</p></td>
<td><p style="text-align: right">10 tk/s</p></td>
<td><p style="text-align: right">21.25 s</p></td>
<td><p style="text-align: right">6182 MB</p></td>
<td><p style="text-align: right">6254 MB</p></td>
<td><p style="text-align: right">N/A</p></td>
<td><p style="text-align: left"><a style="text-decoration: none" href="https://huggingface.co/litert-community/Qwen2.5-1.5B-Instruct/resolve/main/Qwen2.5-1.5B-Instruct_multi-prefill-seq_f32_ekv1280.task">🔗</a></p></td>
</tr>
<tr>
<td><p style="text-align: left">CPU</p></td>
<td><p style="text-align: left">dynamic_int8</p></td>
<td><p style="text-align: right">1280</p></td>
<td><p style="text-align: right">297.58</p></td>
<td><p style="text-align: right">34.25 tk/s</p></td>
<td><p style="text-align: right">3.71 s</p></td>
<td><p style="text-align: right">1598 MB</p></td>
<td><p style="text-align: right">1997 MB</p></td>
<td><p style="text-align: right">N/A</p></td>
<td><p style="text-align: left"><a style="text-decoration: none" href="https://huggingface.co/litert-community/Qwen2.5-1.5B-Instruct/resolve/main/Qwen2.5-1.5B-Instruct_multi-prefill-seq_q8_ekv1280.task">🔗</a></p></td>
</tr>
<tr>
<td><p style="text-align: left">CPU</p></td>
<td><p style="text-align: left">dynamic_int8</p></td>
<td><p style="text-align: right">4096</p></td>
<td><p style="text-align: right">162.72 tk/s</p></td>
<td><p style="text-align: right">26.06 tk/s</p></td>
<td><p style="text-align: right">6.57 s</p></td>
<td><p style="text-align: right">1598 MB</p></td>
<td><p style="text-align: right">2216 MB</p></td>
<td><p style="text-align: right">N/A</p></td>
<td><p style="text-align: left"><a style="text-decoration: none" href="https://huggingface.co/litert-community/Qwen2.5-1.5B-Instruct/resolve/main/Qwen2.5-1.5B-Instruct_multi-prefill-seq_q8_ekv4096.task">🔗</a></p></td>
</tr>
<tr>
<td><p style="text-align: left">GPU</p></td>
<td><p style="text-align: left">dynamic_int8</p></td>
<td><p style="text-align: right">1280</p></td>
<td><p style="text-align: right">1667.75 tk/s</p></td>
<td><p style="text-align: right">30.88 tk/s</p></td>
<td><p style="text-align: right">3.63 s</p></td>
<td><p style="text-align: right">1598 MB</p></td>
<td><p style="text-align: right">1846 MB</p></td>
<td><p style="text-align: right">1505 MB</p></td>
<td><p style="text-align: left"><a style="text-decoration: none" href="https://huggingface.co/litert-community/Qwen2.5-1.5B-Instruct/resolve/main/Qwen2.5-1.5B-Instruct_multi-prefill-seq_q8_ekv1280.task">🔗</a></p></td>
</tr>
<tr>
<td><p style="text-align: left">GPU</p></td>
<td><p style="text-align: left">dynamic_int8</p></td>
<td><p style="text-align: right">4096</p></td>
<td><p style="text-align: right">933.45 tk/s</p></td>
<td><p style="text-align: right">27.30 tk/s</p></td>
<td><p style="text-align: right">4.77 s</p></td>
<td><p style="text-align: right">1598 MB</p></td>
<td><p style="text-align: right">1869 MB</p></td>
<td><p style="text-align: right">1505 MB</p></td>
<td><p style="text-align: left"><a style="text-decoration: none" href="https://huggingface.co/litert-community/Qwen2.5-1.5B-Instruct/resolve/main/Qwen2.5-1.5B-Instruct_multi-prefill-seq_q8_ekv4096.task">🔗</a></p></td>
</tr>
</table>
* For the list of supported quantization schemes see [supported-schemes](https://github.com/google-ai-edge/ai-edge-torch/tree/main/ai_edge_torch/generative/quantize#supported-schemes).
* For these models, we use prefill signature lengths of 32, 128, 512, and 1280.
* Model Size: measured by the size of the .tflite flatbuffer (serialization
format for LiteRT models)
* Memory: indicator of peak RAM usage
* The inference on CPU is accelerated via the LiteRT
[XNNPACK](https://github.com/google/XNNPACK) delegate with 4 threads
* Benchmark is run with cache enabled and initialized. During the first run,
the time to first token may differ.
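As a rough summary of the CPU rows above: at the 1280-token context length, dynamic_int8 delivers about a 6x prefill speedup and cuts time-to-first-token by roughly 5.7x versus the fp32 baseline, computed here from the rounded table values:

```python
# Qwen2.5-1.5B on Samsung S25 Ultra (CPU, 1280-token context): fp32 vs dynamic_int8.
prefill_speedup = 297.58 / 49.50  # prefill tokens/sec (quantized / baseline)
ttft_speedup    = 21.25 / 3.71    # time-to-first-token, s (baseline / quantized)
size_ratio      = 6182 / 1598     # model size, MB (baseline / quantized)

print(round(prefill_speedup, 2), round(ttft_speedup, 2), round(size_ratio, 2))
# 6.01 5.73 3.87
```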
|
litert-community/embeddinggemma-300m
|
litert-community
| 2025-09-22T17:53:41Z | 912 | 11 |
sentence-transformers
|
[
"sentence-transformers",
"tflite",
"sentence-similarity",
"feature-extraction",
"text-embeddings-inference",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-03T19:22:37Z |
---
license: gemma
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- text-embeddings-inference
extra_gated_heading: Access EmbeddingGemma on Hugging Face
extra_gated_prompt: To access EmbeddingGemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# litert-community/embeddinggemma-300m
Main Model Card: [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m)
## Overview
This model card provides a few variants of the EmbeddingGemma model that are ready for deployment on Android and iOS using [LiteRT](https://ai.google.dev/edge/litert), or on Android via the [Google AI Edge RAG Library](https://ai.google.dev/edge/mediapipe/solutions/genai/rag).
## Use the models
### LiteRT
* Try out the demo [example](https://github.com/google-ai-edge/LiteRT/tree/main/litert/samples/semantic_similarity) on GitHub.
### RAG
* Try out the EmbeddingGemma model in the [Google AI Edge RAG Library](https://ai.google.dev/edge/mediapipe/solutions/genai/rag). You can find the SDK on [GitHub](https://github.com/google-ai-edge/ai-edge-apis/tree/main/local_agents/rag) or follow our [Android guide](https://ai.google.dev/edge/mediapipe/solutions/genai/rag/android) to install directly from Maven. We have also published a [sample app](https://github.com/google-ai-edge/ai-edge-apis/tree/main/examples/rag).
* Use the SentencePiece model as the tokenizer for the EmbeddingGemma model.
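Downstream, embedding vectors produced by models like EmbeddingGemma are typically compared with cosine similarity. A self-contained sketch of that comparison (random vectors stand in for real model output, so no model download is needed; the 768-dimensional size matches EmbeddingGemma's output dimension):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product of the two vectors, normalized by their lengths.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
query = rng.standard_normal(768)  # stand-in for an embedded query
doc = rng.standard_normal(768)    # stand-in for an embedded document

score = cosine_similarity(query, doc)
assert -1.0 <= score <= 1.0  # cosine similarity is always in [-1, 1]
print(round(score, 4))
```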
## Performance
### Android
Note that all benchmark stats are from a Samsung S25 Ultra.
<table border="1">
<tr>
<th>Backend</th>
<th>Quantization</th>
<th>Max sequence length</th>
<th>Init time (ms)</th>
<th>Inference time (ms)</th>
<th>Memory (RSS in MB)</th>
<th>Model size (MB)</th>
</tr>
<tr>
<td><p style="text-align: right">GPU</p></td>
<td><p style="text-align: right">Mixed Precision*</p></td>
<td><p style="text-align: right">256</p></td>
<td><p style="text-align: right">1175</p></td>
<td><p style="text-align: right">64</p></td>
<td><p style="text-align: right">762</p></td>
<td><p style="text-align: right">179</p></td>
</tr>
<tr>
<td><p style="text-align: right">GPU</p></td>
<td><p style="text-align: right">Mixed Precision*</p></td>
<td><p style="text-align: right">512</p></td>
<td><p style="text-align: right">1445</p></td>
<td><p style="text-align: right">119</p></td>
<td><p style="text-align: right">762</p></td>
<td><p style="text-align: right">179</p></td>
</tr>
<tr>
<td><p style="text-align: right">GPU</p></td>
<td><p style="text-align: right">Mixed Precision*</p></td>
<td><p style="text-align: right">1024</p></td>
<td><p style="text-align: right">1545</p></td>
<td><p style="text-align: right">241</p></td>
<td><p style="text-align: right">771</p></td>
<td><p style="text-align: right">183</p></td>
</tr>
<tr>
<td><p style="text-align: right">GPU</p></td>
<td><p style="text-align: right">Mixed Precision*</p></td>
<td><p style="text-align: right">2048</p></td>
<td><p style="text-align: right">1707</p></td>
<td><p style="text-align: right">683</p></td>
<td><p style="text-align: right">786</p></td>
<td><p style="text-align: right">196</p></td>
</tr>
<tr>
<td><p style="text-align: right">CPU</p></td>
<td><p style="text-align: right">Mixed Precision*</p></td>
<td><p style="text-align: right">256</p></td>
<td><p style="text-align: right">17.6</p></td>
<td><p style="text-align: right">66</p></td>
<td><p style="text-align: right">110</p></td>
<td><p style="text-align: right">179</p></td>
</tr>
<tr>
<td><p style="text-align: right">CPU</p></td>
<td><p style="text-align: right">Mixed Precision*</p></td>
<td><p style="text-align: right">512</p></td>
<td><p style="text-align: right">24.9</p></td>
<td><p style="text-align: right">169</p></td>
<td><p style="text-align: right">123</p></td>
<td><p style="text-align: right">179</p></td>
</tr>
<tr>
<td><p style="text-align: right">CPU</p></td>
<td><p style="text-align: right">Mixed Precision*</p></td>
<td><p style="text-align: right">1024</p></td>
<td><p style="text-align: right">35.4</p></td>
<td><p style="text-align: right">549</p></td>
<td><p style="text-align: right">169</p></td>
<td><p style="text-align: right">183</p></td>
</tr>
<tr>
<td><p style="text-align: right">CPU</p></td>
<td><p style="text-align: right">Mixed Precision*</p></td>
<td><p style="text-align: right">2048</p></td>
<td><p style="text-align: right">35.8</p></td>
<td><p style="text-align: right">2455</p></td>
<td><p style="text-align: right">333</p></td>
<td><p style="text-align: right">196</p></td>
</tr>
</table>
*Mixed Precision refers to per-channel quantization with int4 for embeddings, feedforward, and projection layers, and int8 for attention (e4_a8_f4_p4).
Notes:
* Init time: the cost paid once per application initialization – subsequent inferences do not pay this cost
* Memory: indicator of peak RAM usage
* Model Size: measured by the size of the .tflite flatbuffer (serialization format for LiteRT models)
* The inference on CPU is accelerated via the LiteRT [XNNPACK](https://github.com/google/XNNPACK) delegate with 4 threads
* Benchmark is run with cache enabled and initialized. During the first run, the latency may differ.
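As a rough sanity check on the ~179 MB model size, a mixed-precision estimate can be computed from parameter counts per bit-width. The split below is an illustrative assumption for a ~300M-parameter model under the e4_a8_f4_p4 scheme described above, not the model's actual layer breakdown:

```python
def estimate_size_mb(params_by_bits: dict[int, float]) -> float:
    """Estimate weight payload size (MB) from parameter counts per bit-width."""
    total_bits = sum(bits * count for bits, count in params_by_bits.items())
    return total_bits / 8 / 1e6  # bits -> bytes -> MB

# Hypothetical split: most weights at int4, attention weights at int8.
split = {4: 250e6, 8: 50e6}
print(round(estimate_size_mb(split)))  # ~175 MB, in the ballpark of the table
```

The estimate excludes quantization scales and flatbuffer overhead, which account for the remaining few MB.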
|
sidhantoon/Moji_v23
|
sidhantoon
| 2025-09-22T17:47:53Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-22T17:43:58Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
MRockatansky/my-awesome-model
|
MRockatansky
| 2025-09-22T17:46:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-09-22T17:46:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YAM57/blip-bar-graphs-with-variations
|
YAM57
| 2025-09-22T17:43:43Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T17:43:43Z |
---
license: apache-2.0
---
|
mradermacher/ministral-8B-Instruct-2410-abliterated-GGUF
|
mradermacher
| 2025-09-22T17:39:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"abliterated",
"uncensored",
"ministral",
"mistral",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ru",
"zh",
"ja",
"dataset:N/A",
"base_model:realoperator42/ministral-8B-Instruct-2410-abliterated",
"base_model:quantized:realoperator42/ministral-8B-Instruct-2410-abliterated",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T10:33:36Z |
---
base_model: realoperator42/ministral-8B-Instruct-2410-abliterated
datasets:
- N/A
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- abliterated
- uncensored
- ministral
- mistral
- text-generation
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/realoperator42/ministral-8B-Instruct-2410-abliterated
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ministral-8B-Instruct-2410-abliterated-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ministral-8B-Instruct-2410-abliterated-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ministral-8B-Instruct-2410-abliterated-GGUF/resolve/main/ministral-8B-Instruct-2410-abliterated.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-8B-Instruct-2410-abliterated-GGUF/resolve/main/ministral-8B-Instruct-2410-abliterated.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-8B-Instruct-2410-abliterated-GGUF/resolve/main/ministral-8B-Instruct-2410-abliterated.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ministral-8B-Instruct-2410-abliterated-GGUF/resolve/main/ministral-8B-Instruct-2410-abliterated.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-8B-Instruct-2410-abliterated-GGUF/resolve/main/ministral-8B-Instruct-2410-abliterated.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-8B-Instruct-2410-abliterated-GGUF/resolve/main/ministral-8B-Instruct-2410-abliterated.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ministral-8B-Instruct-2410-abliterated-GGUF/resolve/main/ministral-8B-Instruct-2410-abliterated.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ministral-8B-Instruct-2410-abliterated-GGUF/resolve/main/ministral-8B-Instruct-2410-abliterated.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-8B-Instruct-2410-abliterated-GGUF/resolve/main/ministral-8B-Instruct-2410-abliterated.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ministral-8B-Instruct-2410-abliterated-GGUF/resolve/main/ministral-8B-Instruct-2410-abliterated.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ministral-8B-Instruct-2410-abliterated-GGUF/resolve/main/ministral-8B-Instruct-2410-abliterated.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ministral-8B-Instruct-2410-abliterated-GGUF/resolve/main/ministral-8B-Instruct-2410-abliterated.f16.gguf) | f16 | 16.1 | 16 bpw, overkill |
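A rough way to relate the sizes above to quantization levels is bits per weight: file size in bits divided by parameter count. A hedged sketch (sizes from the table; the ~8B parameter count is approximate, and K-quants carry some per-block overhead, so values run slightly above the nominal bit-width):

```python
PARAMS = 8.0e9  # approximate parameter count for an 8B model

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    # GB -> bits, divided by the number of weights.
    return size_gb * 8e9 / params

for name, size_gb in [("Q2_K", 3.3), ("Q4_K_M", 5.0), ("Q8_0", 8.6)]:
    print(f"{name}: ~{bits_per_weight(size_gb):.1f} bits/weight")
```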
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
LandCruiser/sn21_omg3_2309_1
|
LandCruiser
| 2025-09-22T17:13:37Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-22T17:08:16Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Pacovit/Qwen3-0.6B-Gensyn-Swarm-vigilant_prehistoric_clam
|
Pacovit
| 2025-09-22T17:13:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am vigilant_prehistoric_clam",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T09:44:13Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am vigilant_prehistoric_clam
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
valleriee/pii-model-6-chat
|
valleriee
| 2025-09-22T17:12:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T17:04:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758560889
|
poolkiltzn
| 2025-09-22T17:09:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T17:09:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jshrdt/lowhipa-base-cv
|
jshrdt
| 2025-09-22T16:42:44Z | 7 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"automatic-speech-recognition",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-base",
"base_model:adapter:openai/whisper-base",
"region:us"
] |
automatic-speech-recognition
| 2025-09-12T14:01:22Z |
---
base_model: openai/whisper-base
library_name: peft
model-index:
- name: lowhipa-base-cv
results: []
datasets:
- mozilla-foundation/common_voice_11_0
pipeline_tag: automatic-speech-recognition
---
# lowhipa-base-cv
This Whisper-for-IPA (WhIPA) model adapter is a PEFT LoRA fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on a subset of the CommonVoice11 dataset (1k samples each from Greek, Finnish, Hungarian, Japanese, Maltese, Polish, Tamil) with G2P-based IPA transcriptions.
## Model description
For deployment and description, please refer to https://github.com/jshrdt/whipa.
```
from transformers import WhisperForConditionalGeneration, WhisperTokenizer, WhisperProcessor
from peft import PeftModel
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-base", task="transcribe")
tokenizer.add_special_tokens({"additional_special_tokens": ["<|ip|>"] + tokenizer.all_special_tokens})
base_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
base_model.generation_config.lang_to_id["<|ip|>"] = tokenizer.convert_tokens_to_ids(["<|ip|>"])[0]
base_model.resize_token_embeddings(len(tokenizer))
whipa_model = PeftModel.from_pretrained(base_model, "jshrdt/lowhipa-base-cv")
whipa_model.generation_config.language = "<|ip|>"
whipa_model.generation_config.task = "transcribe"
whipa_processor = WhisperProcessor.from_pretrained("openai/whisper-base", task="transcribe")
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
### Training results
### Framework versions
- PEFT 0.15.1
|
jshrdt/lowhipa-large-comb
|
jshrdt
| 2025-09-22T16:31:42Z | 2 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"automatic-speech-recognition",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:tunis-ai/arabic_speech_corpus",
"dataset:THCHS-30",
"arxiv:1512.01882",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2025-09-12T14:21:18Z |
---
library_name: peft
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- tunis-ai/arabic_speech_corpus
- THCHS-30
model-index:
- name: lowhipa-large-comb
results: []
pipeline_tag: automatic-speech-recognition
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lowhipa-large-comb
This Whisper-for-IPA (WhIPA) model adapter is a PEFT LoRA fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on a subset of:
- CommonVoice11 dataset (1k samples each from Greek, Finnish, Hungarian, Japanese, Maltese, Polish, Tamil) with G2P-based IPA transcriptions
- Mandarin THCHS-30 database (https://arxiv.org/pdf/1512.01882) with IPA transcriptions by Taubert (2023, https://zenodo.org/records/7528596) (1k samples)
- Arabic Speech Corpus (https://en.arabicspeechcorpus.com) with custom IPA transcriptions transliterated from the provided Buckwalter transcriptions (1k samples)
## Model description
For deployment and description, please refer to https://github.com/jshrdt/whipa.
```python
from transformers import WhisperForConditionalGeneration, WhisperTokenizer, WhisperProcessor
from peft import PeftModel

# Extend the tokenizer with the custom IPA language token
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-large-v2", task="transcribe")
tokenizer.add_special_tokens({"additional_special_tokens": ["<|ip|>"] + tokenizer.all_special_tokens})

# Register the new token with the base model before attaching the LoRA adapter
base_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v2")
base_model.generation_config.lang_to_id["<|ip|>"] = tokenizer.convert_tokens_to_ids(["<|ip|>"])[0]
base_model.resize_token_embeddings(len(tokenizer))

whipa_model = PeftModel.from_pretrained(base_model, "jshrdt/lowhipa-large-comb")
whipa_model.generation_config.language = "<|ip|>"
whipa_model.generation_config.task = "transcribe"
whipa_processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2", task="transcribe")
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
### Training results
| Training Loss | Epoch | Validation Loss |
|:-------------:|:-------:|:---------------:|
| 0.7537 | 2.0323 | 0.5797 |
| 0.2638 | 4.0645 | 0.4017 |
| 0.1532 | 6.0968 | 0.4054 |
| 0.0909 | 8.1290 | 0.4511 |
| 0.0535 | 10.1613 | 0.4732 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Bluebomber182/seed-vc-bigvgan_v2_24khz_100band_256x_model
|
Bluebomber182
| 2025-09-22T16:28:41Z | 0 | 0 | null |
[
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-09-08T23:39:20Z |
---
license: cc-by-nc-4.0
---
This was trained on the Emilia dataset and on trimmed-down Emilia-YODAS and AniSpeech datasets whose samples pass the 3.6 MOS score threshold. The f0 condition is set to true, so you can run app_svc.py on it. Note that it has an inference problem with any of the checkpoints.
|
aamijar/Llama-2-7b-hf-dora-r8-mrpc-epochs1
|
aamijar
| 2025-09-22T16:19:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T16:19:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EthanRhys/Silver-Masumi-Mutsuda
|
EthanRhys
| 2025-09-22T16:15:41Z | 0 | 0 | null |
[
"license:openrail++",
"region:us"
] | null | 2025-09-22T16:12:47Z |
---
license: openrail++
---
|
Vivek23454/blockassist
|
Vivek23454
| 2025-09-22T16:13:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"swift savage otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T13:27:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- swift savage otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NexaAI/llama3.2-3B-intel-npu
|
NexaAI
| 2025-09-22T16:08:14Z | 0 | 0 | null |
[
"llama",
"region:us"
] | null | 2025-09-21T15:30:46Z |
# Llama-3.2-3B
Run **Llama-3.2-3B** optimized for **Intel NPUs** with [nexaSDK](https://sdk.nexa.ai).
## Quickstart
1. **Install nexaSDK** and create a free account at [sdk.nexa.ai](https://sdk.nexa.ai)
2. **Activate your device** with your access token:
```bash
nexa config set license '<access_token>'
```
3. Run the model on the Intel NPU in one line:
```bash
nexa infer NexaAI/llama3.2-3B-intel-npu
```
## Model Description
**Llama-3.2-3B** is a compact member of the Llama 3.2 family, designed to provide strong general-purpose language modeling in a lightweight 3B parameter footprint.
It balances efficiency with capability, making it well-suited for edge devices, prototyping, and applications where latency and resource constraints are critical.
## Features
- **Lightweight architecture**: 3B parameters optimized for fast inference and low memory usage.
- **Instruction-following**: Tuned for prompts, Q&A, and step-by-step reasoning.
- **Multilingual capabilities**: Covers a wide range of global languages at smaller scale.
- **Deployment flexibility**: Runs efficiently on consumer hardware and server environments.
## Use Cases
- Conversational assistants and chatbots.
- Educational tools and lightweight tutoring systems.
- Prototyping and experimentation with large language models on limited resources.
- Applications where cost or latency is a priority over sheer scale.
## Inputs and Outputs
**Input**: Text prompts—questions, commands, or code snippets.
**Output**: Natural language responses including answers, explanations, or structured outputs.
## License
- Licensed under **Meta Llama 3.2 Community License**
## References
- Model card: [https://huggingface.co/meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B)
|
HoyerChou/YOTO
|
HoyerChou
| 2025-09-22T16:02:22Z | 0 | 0 | null |
[
"onnx",
"arxiv:2501.14208",
"license:apache-2.0",
"region:us"
] | null | 2025-01-28T01:59:58Z |
---
license: apache-2.0
---
Preprocessed dataset for my paper "*You Only Teach Once: Learn One-Shot Bimanual Robotic Manipulation from Video Demonstrations*" [[arXiv](https://arxiv.org/abs/2501.14208)] / [[Project](https://hnuzhy.github.io/projects/YOTO/)] / [[Code](https://github.com/hnuzhy/YOTO)]
Please refer [AugDemos](https://github.com/hnuzhy/YOTO/tree/main/AugDemos) and [BiDP](https://github.com/hnuzhy/YOTO/tree/main/BiDP) for the usage of these uploaded datasets and pretrained models.
* Raw left/right RGB images and dual-arm robot actions: [drawer.zip](https://huggingface.co/HoyerChou/YOTO/blob/main/drawer.zip), [pouring.zip](https://huggingface.co/HoyerChou/YOTO/blob/main/pouring.zip), [unscrew.zip](https://huggingface.co/HoyerChou/YOTO/blob/main/unscrew.zip), [uncover.zip](https://huggingface.co/HoyerChou/YOTO/blob/main/uncover.zip), [openbox.zip](https://huggingface.co/HoyerChou/YOTO/blob/main/openbox.zip).
* RGB images of segmented manipulated objects: [drawer_masks.zip](https://huggingface.co/HoyerChou/YOTO/blob/main/drawer_masks.zip), [pouring_masks.zip](https://huggingface.co/HoyerChou/YOTO/blob/main/pouring_masks.zip), [unscrew_masks.zip](https://huggingface.co/HoyerChou/YOTO/blob/main/unscrew_masks.zip), [uncover_masks.zip](https://huggingface.co/HoyerChou/YOTO/blob/main/uncover_masks.zip), [openbox_masks.zip](https://huggingface.co/HoyerChou/YOTO/blob/main/openbox_masks.zip).
* Preprocessed datasets without augmentation: [drawer_preprocessed.json](https://huggingface.co/HoyerChou/YOTO/blob/main/drawer_preprocessed.json), [pouring_preprocessed.json](https://huggingface.co/HoyerChou/YOTO/blob/main/pouring_preprocessed.json), [unscrew_preprocessed.json](https://huggingface.co/HoyerChou/YOTO/blob/main/unscrew_preprocessed.json), [uncover_preprocessed.json](https://huggingface.co/HoyerChou/YOTO/blob/main/uncover_preprocessed.json), [openbox_preprocessed.json](https://huggingface.co/HoyerChou/YOTO/blob/main/openbox_preprocessed.json).
* Preprocessed datasets with 100x augmentation: [drawer_preprocessed_aug100x_1~4.json](https://huggingface.co/HoyerChou/YOTO/tree/main/), [pouring_preprocessed_aug100x_1~3.json](https://huggingface.co/HoyerChou/YOTO/tree/main/), [unscrew_preprocessed_aug100x.json](https://huggingface.co/HoyerChou/YOTO/blob/main/unscrew_preprocessed_aug100x.json), [uncover_preprocessed_aug100x.json](https://huggingface.co/HoyerChou/YOTO/blob/main/uncover_preprocessed_aug100x.json), [openbox_preprocessed_aug100x.json](https://huggingface.co/HoyerChou/YOTO/blob/main/openbox_preprocessed_aug100x.json).
* Pretrained noaug BiDP models: [bidp_drawer_noaug_ckpt01999.pth](https://huggingface.co/HoyerChou/YOTO/blob/main/bidp_drawer_noaug_ckpt01999.pth), [bidp_pouring_noaug_ckpt01999.pth](https://huggingface.co/HoyerChou/YOTO/blob/main/bidp_pouring_noaug_ckpt01999.pth), [bidp_unscrew_noaug_ckpt03999.pth](https://huggingface.co/HoyerChou/YOTO/blob/main/bidp_unscrew_noaug_ckpt03999.pth), [bidp_uncover_noaug_ckpt03999.pth](https://huggingface.co/HoyerChou/YOTO/blob/main/bidp_uncover_noaug_ckpt03999.pth), [bidp_openbox_noaug_ckpt03999.pth](https://huggingface.co/HoyerChou/YOTO/blob/main/bidp_openbox_noaug_ckpt03999.pth).
* Pretrained withaug BiDP models: [bidp_drawer_withaug_ckpt00499.pth](https://huggingface.co/HoyerChou/YOTO/blob/main/bidp_drawer_withaug_ckpt00499.pth), [bidp_pouring_withaug_ckpt00499.pth](https://huggingface.co/HoyerChou/YOTO/blob/main/bidp_pouring_withaug_ckpt00499.pth), [bidp_unscrew_withaug_ckpt00999.pth](https://huggingface.co/HoyerChou/YOTO/blob/main/bidp_unscrew_withaug_ckpt00999.pth), [bidp_uncover_withaug_ckpt00999.pth](https://huggingface.co/HoyerChou/YOTO/blob/main/bidp_uncover_withaug_ckpt00999.pth), [bidp_openbox_withaug_ckpt00999.pth](https://huggingface.co/HoyerChou/YOTO/blob/main/bidp_openbox_withaug_ckpt00999.pth).
|
ysakhale/stop-sign-automl
|
ysakhale
| 2025-09-22T15:56:13Z | 0 | 0 | null |
[
"image-classification",
"automl",
"autogluon",
"multimodal",
"dataset:ecopus/sign_identification",
"license:mit",
"region:us"
] |
image-classification
| 2025-09-22T15:53:29Z |
---
tags:
- image-classification
- automl
- autogluon
- multimodal
datasets:
- ecopus/sign_identification
metrics:
- accuracy
- f1
license: mit
---
# AutoML Neural Network Model for Stop Sign Classification
## Model Summary
This model was trained using **AutoGluon MultiModalPredictor (v1.4.0)** on the dataset [ecopus/sign_identification](https://huggingface.co/datasets/ecopus/sign_identification).
The task is **binary image classification**, predicting whether a stop sign is present (`1`) or absent (`0`) in the input image.
- **Best Model**: AutoML-selected neural architecture (Hybrid CNN/Transformer backbone via AutoMM)
- **Validation Strategy**: Stratified 80/20 train/test split with early stopping on validation
- **Precision / Recall / F1**: Reported in confusion matrix and classification report
---
## Dataset
- **Source**: [ecopus/sign_identification](https://huggingface.co/datasets/ecopus/sign_identification)
- **Size**: ~X samples (replace with your count)
- **Features**:
- `image`: stop sign or non-stop sign photo
- `label`: binary class (0 = no stop sign, 1 = stop sign present)
---
## Preprocessing
- Images saved as `.png` files from dataset byte arrays
- Train/test split stratified on `label`
- AutoGluon applies default image preprocessing:
- Resizing to fixed resolution
- Normalization
- Default augmentations (random crop/flip/resize)
---
## Results
### Test Metrics (example, update with actual numbers)
- Accuracy: 0.94
- Precision: 0.93
- Recall: 0.94
- F1: 0.94
### Confusion Matrix
Balanced classification with a small number of false positives/false negatives.
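For transparency, the headline figures can be reproduced directly from confusion-matrix counts. The sketch below uses hypothetical counts (`tp=47, fp=3, fn=3, tn=47`), chosen only to match the example metrics above; they are not the real evaluation counts:

```python
def prf_from_confusion(tp: int, fp: int, fn: int, tn: int):
    """Precision, recall, F1 (positive class = stop sign present) and accuracy."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical counts for illustration only.
p, r, f1, acc = prf_from_confusion(tp=47, fp=3, fn=3, tn=47)
print(round(p, 2), round(r, 2), round(f1, 2), round(acc, 2))  # 0.94 0.94 0.94 0.94
```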
---
## Error Analysis
- Misclassifications often occur with:
- Occluded or partially visible stop signs
- Unusual lighting conditions (night, glare)
- Red objects mistaken for stop signs (background clutter)
---
## Intended Use
- Educational use only
- Demonstration of AutoML for neural networks in CMU course 24-679
- Not suitable for deployment in safety-critical systems
---
## Limitations
- Performance may degrade on images outside the dataset distribution
- Sensitive to dataset bias (lighting, camera angle, geography)
- May fail in adversarial conditions (graffiti, damaged signs)
---
## License
- MIT
---
## Hardware/Compute
- Training performed on Google Colab with a **T4 GPU**
- AutoML time budget: 30 minutes (1800s)
---
## AI Usage Disclosure
- This model was built using **AutoGluon AutoML** framework
- Hyperparameter and architecture search were automated
|
adalberto-temp/energy_dpo_V0.2
|
adalberto-temp
| 2025-09-22T15:41:55Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T00:39:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nhatle308/blockassist
|
nhatle308
| 2025-09-22T15:31:54Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"lively snorting bee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-13T09:20:25Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- lively snorting bee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
MrOrdo/Cassini-hacktone-2024
|
MrOrdo
| 2025-09-22T15:20:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2024-11-25T08:12:50Z |
# Cassini Hackathon 2024 - YOLO Object Detection
This project was developed for the Cassini Hackathon 2024, focusing on object detection using YOLO v11 for:
- Military drone detection
- Error detection in images/video
## Project Structure
```
├── train.py # Main training script
├── *.pt files # PyTorch model weights (best/last checkpoints)
├── drone_mil.v1i.yolov11/ # Military drone detection dataset
├── FindError.v4i.yolov11/ # Error detection dataset
├── runs/ # Training results and predictions
├── video/ # Test videos (DJI drone footage)
└── yolo11n_ncnn_model/ # NCNN optimized model
```
## Features
- **Drone Detection**: Military applications using YOLO v11
- **Error Detection**: Automated error identification in visual data
- **Multiple Model Variants**: Different model sizes (nano, small, medium)
- **Video Processing**: Real-time detection on drone footage
## Models
- `yolo11n.pt` - YOLO v11 Nano (fastest)
- `yolo11s.pt` - YOLO v11 Small (balanced)
- Various trained models with different configurations
## Usage
```bash
# Training
python train.py
# Inference
# Use the trained models in runs/detect/ for predictions
```
## Dataset Information
### Military Drone Dataset
- **Source**: drone_mil.v1i.yolov11
- **Classes**: Military drone detection
- **Format**: YOLO format with train/valid/test splits
### Error Detection Dataset
- **Source**: FindError.v4i.yolov11
- **Classes**: Various error types
- **Format**: YOLO format with train/valid/test splits
## Results
Training results and benchmarks are stored in:
- `runs/detect/` - Prediction outputs
- `benchmarks.log` - Performance metrics
## Note
Large files (datasets, model weights, videos) are excluded from this repository due to size constraints. The training script and configuration files are included for reproducibility.
## Hackathon Context
Developed for Cassini Hackathon 2024 - Space Technology and Earth Observation challenges.
|
alecglover/Affine-v1
|
alecglover
| 2025-09-22T15:20:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:2501.12948",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T15:18:00Z |
---
license: mit
library_name: transformers
---
# DeepSeek-R1
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<!-- markdownlint-disable no-duplicate-header -->
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20R1-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf"><b>Paper Link</b>👁️</a>
</p>
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
**NOTE: Before running DeepSeek-R1 series models locally, we kindly recommend reviewing the [Usage Recommendation](#usage-recommendations) section.**
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly changed their configs and tokenizers; please use our settings when running these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
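The two headline metrics in the tables above can be sketched in a few lines: pass@1 is the mean correctness over the k sampled responses, while cons@64 scores the single majority-vote answer. The function names and the k=8 toy sample below are illustrative, not from the paper.

```python
from collections import Counter

def pass_at_1(correct_flags):
    """Estimate pass@1 as the mean correctness over k sampled responses."""
    return sum(correct_flags) / len(correct_flags)

def cons_at_k(answers, reference):
    """cons@k: take the majority-vote answer across k samples, score it once."""
    majority, _ = Counter(answers).most_common(1)[0]
    return 1.0 if majority == reference else 0.0

# Eight sampled answers to one query; the reference answer is "42".
samples = ["42", "41", "42", "42", "7", "42", "42", "41"]
print(pass_at_1([a == "42" for a in samples]))  # 0.625
print(cons_at_k(samples, "42"))                 # 1.0
```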
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website, [chat.deepseek.com](https://chat.deepseek.com), by switching on the "DeepThink" button.
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
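Since the endpoint is OpenAI-compatible, a chat-completion request body can be sketched as below. The model id `deepseek-reasoner` and the exact field set are assumptions to verify against DeepSeek's platform documentation; the request follows the usage recommendations further down (no system prompt; temperature 0.6).

```python
import json

# Sketch of an OpenAI-compatible chat-completion request body.
# The model id "deepseek-reasoner" is an assumption -- verify it
# against DeepSeek's platform documentation before use.
payload = {
    "model": "deepseek-reasoner",
    "messages": [
        # No system prompt: all instructions live in the user turn.
        {
            "role": "user",
            "content": "Please reason step by step, and put your final "
                       "answer within \\boxed{}. What is 17 * 24?",
        }
    ],
    "temperature": 0.6,
    "top_p": 0.95,
}
print(json.dumps(payload, indent=2))
```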
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
**NOTE: DeepSeek-R1 is not yet directly supported by Hugging Face's Transformers.**
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
You can also easily start a service using [SGLang](https://github.com/sgl-project/sglang):
```bash
python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --trust-remote-code --tp 2
```
### Usage Recommendations
**We recommend adhering to the following configurations when utilizing the DeepSeek-R1 series models, including benchmarking, to achieve the expected performance:**
1. Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
2. **Avoid adding a system prompt; all instructions should be contained within the user prompt.**
3. For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
4. When evaluating model performance, it is recommended to conduct multiple tests and average the results.
Additionally, we have observed that the DeepSeek-R1 series models tend to bypass the thinking pattern (i.e., output "\<think\>\n\n\</think\>") when responding to certain queries, which can adversely affect the model's performance.
**To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with "\<think\>\n" at the beginning of every output.**
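When building prompts manually (e.g. for a completion-style endpoint), one way to enforce this is to append the opening tag after rendering the chat template. A minimal sketch, assuming a `rendered_chat` string that would normally come from `tokenizer.apply_chat_template(..., add_generation_prompt=True)`; the `<|User|>`/`<|Assistant|>` markers are literal stand-ins:

```python
THINK_PREFIX = "<think>\n"

def build_generation_prompt(rendered_chat: str) -> str:
    """Append the opening think tag so generation starts inside the
    reasoning block instead of skipping it with "<think>\\n\\n</think>"."""
    return rendered_chat + THINK_PREFIX

# A literal stand-in for a rendered chat template.
prompt = build_generation_prompt("<|User|>What is 2 + 2?<|Assistant|>")
print(prompt.endswith(THINK_PREFIX))  # True
```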
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from the [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under the [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and are now fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title={DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author={DeepSeek-AI},
year={2025},
eprint={2501.12948},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2501.12948},
}
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758554088
|
poolkiltzn
| 2025-09-22T15:16:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T15:15:53Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
diouck/llama3_merged
|
diouck
| 2025-09-22T15:13:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T14:28:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ZezhiShao/BLAST_CKPTS
|
ZezhiShao
| 2025-09-22T15:02:00Z | 0 | 0 | null |
[
"safetensors",
"en",
"dataset:ZezhiShao/BLAST",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T04:27:56Z |
---
license: apache-2.0
datasets:
- ZezhiShao/BLAST
language:
- en
---
Chronos, MOIRAI, and TimeMoE models pretrained on BLAST.
|
nonoJDWAOIDAWKDA/Shiori_reviewed_ft_StyleTTS2
|
nonoJDWAOIDAWKDA
| 2025-09-22T15:00:11Z | 0 | 0 | null |
[
"text-to-speech",
"StyleTTS2",
"speech-synthesis",
"en",
"license:mit",
"region:us"
] |
text-to-speech
| 2025-09-22T14:59:27Z |
---
language: en
tags:
- text-to-speech
- StyleTTS2
- speech-synthesis
license: mit
pipeline_tag: text-to-speech
---
# StyleTTS2 Fine-tuned Model
This model is a fine-tuned version of StyleTTS2, containing all necessary components for inference.
## Model Details
- **Base Model:** StyleTTS2-LibriTTS
- **Architecture:** StyleTTS2
- **Task:** Text-to-Speech
- **Last Checkpoint:** epoch_2nd_00004.pth
## Training Details
- **Total Epochs:** 5
- **Completed Epochs:** 4
- **Total Iterations:** 1225
- **Batch Size:** 2
- **Max Length:** 620
- **Learning Rate:** 0.0001
- **Final Validation Loss:** 0.400027
## Model Components
The repository includes all necessary components for inference:
### Main Model Components:
- bert.pth
- bert_encoder.pth
- predictor.pth
- decoder.pth
- text_encoder.pth
- predictor_encoder.pth
- style_encoder.pth
- diffusion.pth
- text_aligner.pth
- pitch_extractor.pth
- mpd.pth
- msd.pth
- wd.pth
### Utility Components:
- ASR (Automatic Speech Recognition)
- epoch_00080.pth
- config.yml
- models.py
- layers.py
- JDC (F0 Prediction)
- bst.t7
- model.py
- PLBERT
- step_1000000.t7
- config.yml
- util.py
### Additional Files:
- text_utils.py: Text preprocessing utilities
- models.py: Model architecture definitions
- utils.py: Utility functions
- config.yml: Model configuration
- config.json: Detailed configuration and training metrics
## Training Metrics
Training metrics visualization is available in training_metrics.png
## Directory Structure
```
├── Utils/
│   ├── ASR/
│   ├── JDC/
│   └── PLBERT/
├── model_components/
└── configs/
```
## Usage Instructions
1. Load the model using the provided config.yml
2. Ensure all utility components (ASR, JDC, PLBERT) are in their respective directories
3. Use text_utils.py for text preprocessing
4. Follow the inference example in the StyleTTS2 documentation
|
opendiffusionai/xlsd32-beta1
|
opendiffusionai
| 2025-09-22T14:57:25Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-09-22T13:58:28Z |
---
base_model:
- stable-diffusion-v1-5/stable-diffusion-v1-5
---
# SD1.5 model, with SDXL vae grafted on, and then retrained to work properly
Currently only in huggingface/diffusers format.
May generate a "checkpoint" model later
# Creation notes:
dataset: 80k square images

**Phase 1:**
- FP32, b32a8, optimi LION, LR 1e-5 constant, for only 150 steps
- model locked except for the following layers: in, out, up.3, down.0
- note that the smaller set of trainable params is what lets us use b32 on a 4090 here

**Phase 2:**
- FP32, b16a16, optimi LION, initial LR 1e-5, linear decay over 6 epochs (1920 effective steps)
- picked step 1800
- phase 2 took around 15 hours, so total time was maybe 16 hours
## Why 2-phase
In theory, phase 1 wasn't strictly necessary. However, early retraining would most likely have caused very large changes to the core model that aren't actually needed for VAE retraining, so I opted for minimal disruption.
|
senga-ml/dnote-header
|
senga-ml
| 2025-09-22T14:49:37Z | 239 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-06-04T08:55:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nnilayy/dreamer_window_512-binary-arousal-Kfold-3-stride_512
|
nnilayy
| 2025-09-22T14:46:55Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-09-22T14:46:51Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
bowo255/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_barky_ibis
|
bowo255
| 2025-09-22T14:43:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am powerful_barky_ibis",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T07:37:12Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am powerful_barky_ibis
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aochongoliverli/Qwen2.5-3B-math8k-distill-AM-Distill-Qwen-32B-16k-5epochs-2e-5lr-step400
|
aochongoliverli
| 2025-09-22T14:42:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T14:39:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
davidguzmanr/mms-tts-yor
|
davidguzmanr
| 2025-09-22T14:38:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T14:38:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
saracandu/stldec_random_128_umap
|
saracandu
| 2025-09-22T14:10:14Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"stldec128umap",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-09-12T12:38:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jonc/my-embedding-gemma
|
jonc
| 2025-09-22T13:57:01Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"gemma3_text",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:3",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:google/embeddinggemma-300m",
"base_model:finetune:google/embeddinggemma-300m",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-22T13:56:38Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:3
- loss:MultipleNegativesRankingLoss
base_model: google/embeddinggemma-300m
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on google/embeddinggemma-300m
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [google/embeddinggemma-300m](https://huggingface.co/google/embeddinggemma-300m) <!-- at revision c5cfa06e5e282a820e85d57f7fb053207494f41d -->
- **Maximum Sequence Length:** 2048 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 2048, 'do_lower_case': False, 'architecture': 'Gemma3TextModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 3072, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Dense({'in_features': 3072, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(4): Normalize()
)
```
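For intuition, the pooling and normalization stages in the architecture above can be illustrated in isolation. This is a simplified sketch, not the library's implementation, and it omits the two `Dense` projection modules between pooling and normalization: mean pooling averages token embeddings under the attention mask, and the final `Normalize()` module L2-normalizes the result.

```python
import numpy as np

def mean_pool_and_normalize(token_embeddings, attention_mask):
    """Mean pooling over non-padding tokens, then L2 normalization."""
    mask = attention_mask[:, None].astype(float)          # (seq_len, 1)
    pooled = (token_embeddings * mask).sum(axis=0) / mask.sum()
    return pooled / np.linalg.norm(pooled)
```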
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jonc/my-embedding-gemma")
# Run inference
queries = [
"Which planet is known as the Red Planet?",
]
documents = [
"Venus is often called Earth's twin because of its similar size and proximity.",
'Mars, known for its reddish appearance, is often referred to as the Red Planet.',
'Saturn, famous for its rings, is sometimes mistaken for the Red Planet.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 768] [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.2880, 0.6381, 0.4942]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 3 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 3 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 12.0 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 15.33 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 12.67 tokens</li><li>max: 14 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:--------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------|
| <code>How do I open a NISA account?</code> | <code>What is the procedure for starting a new tax-free investment account?</code> | <code>I want to check the balance of my regular savings account.</code> |
| <code>Are there fees for making an early repayment on a home loan?</code> | <code>If I pay back my house loan early, will there be any costs?</code> | <code>What is the management fee for this investment trust?</code> |
| <code>What is the coverage for medical insurance?</code> | <code>Tell me about the benefits of the health insurance plan.</code> | <code>What is the cancellation policy for my life insurance?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"gather_across_devices": false
}
```
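For intuition, MultipleNegativesRankingLoss treats the other positives in a batch as negatives: scaled cosine similarities between anchors and positives form a score matrix, and cross-entropy pushes each anchor toward its own positive on the diagonal. A minimal NumPy sketch of this objective (illustrative only, not the library code):

```python
import numpy as np

def mnr_loss(anchors, positives, scale=20.0):
    """In-batch-negatives loss: row i's positive is column i of the score matrix."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                           # scaled cosine similarities
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # cross-entropy on the diagonal
```

With perfectly matched anchor/positive pairs the loss approaches zero; shuffling the positives drives it up, which is the signal the trainer minimizes.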
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 1
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `prompts`: task: sentence similarity | query:
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 1
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: task: sentence similarity | query:
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:-----:|:----:|:-------------:|
| 1.0 | 3 | 0.0483 |
| 2.0 | 6 | 0.0 |
| 3.0 | 9 | 0.0 |
| 4.0 | 12 | 0.0 |
| 5.0 | 15 | 0.0 |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.1
- Transformers: 4.57.0.dev0
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
johnnycwatt/MiniStoryGPT
|
johnnycwatt
| 2025-09-22T13:46:15Z | 0 | 0 | null |
[
"dataset:roneneldan/TinyStories",
"arxiv:1706.03762",
"arxiv:2305.07759",
"license:mit",
"region:us"
] | null | 2025-09-22T13:20:47Z |
---
license: mit
datasets:
- roneneldan/TinyStories
---
# MiniStoryGPT
**MiniStoryGPT** is a compact, educational GPT-style language model built in PyTorch to demonstrate training transformer architectures from scratch. It is trained on the [TinyStories dataset](https://huggingface.co/datasets/roneneldan/TinyStories) to generate short, child-friendly narratives. The model follows principles from "Attention is All You Need" and draws inspiration from Andrej Karpathy’s nanoGPT and Zero to Hero materials.
**Purpose**: This model is designed for educational and experimentation purposes, offering hands-on experience with building, training, and sampling from GPT-like models. It is **not** intended for production use.
## Model Details
- **Architecture**: GPT-style transformer with 2 layers, 8 attention heads, and 768 embedding dimensions.
- **Parameters**: ~30 million
- **Vocabulary Size**: 10,000 tokens (remapped from GPT-2 tokenizer for efficiency)
- **Training Data**: TinyStories dataset (preprocessed into `train.bin` and `val.bin`)
- **Training**: ~50,000 iterations with a batch size of 32, context length of 512, and AdamW optimizer (learning rate 3e-4). Achieved a training loss of ~1.55.
- **Checkpoint**: `MiniStoryGPT-30M.pth` (367MB), saved at iteration 20,000.
- **Hardware**: Trained on a single GPU (CUDA-compatible).
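As a sanity check on the stated ~30M parameters, a rough estimate for this configuration lands near that figure. The sketch below counts weight matrices only, ignoring biases and LayerNorm weights, and assumes an untied LM head; none of these assumptions are confirmed by the card.

```python
def gpt_param_count(vocab=10_000, ctx=512, emb=768, n_layer=2, tied_head=False):
    """Rough GPT parameter estimate: weight matrices only, no biases/LayerNorm."""
    attn = 4 * emb * emb      # query, key, value and output projections
    mlp = 8 * emb * emb       # two linear layers with a 4x hidden expansion
    total = vocab * emb + ctx * emb + n_layer * (attn + mlp)
    if not tied_head:
        total += vocab * emb  # separate language-model head
    return total

print(gpt_param_count())  # 29908992, i.e. ~30M
```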
## Installation
To use MiniStoryGPT, install the required dependencies:
```bash
pip install torch tiktoken
```
## Usage
Download the model and mappings from this Hugging Face repository:
```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="johnnycwatt/MiniStoryGPT", filename="MiniStoryGPT-30M.pth", local_dir=".")
hf_hub_download(repo_id="johnnycwatt/MiniStoryGPT", filename="old_to_new.pt", local_dir=".")
hf_hub_download(repo_id="johnnycwatt/MiniStoryGPT", filename="new_to_old.pt", local_dir=".")
```
Run the provided `sampler.py` to generate stories:
```bash
python sampler.py
```
Example code to load and generate:
```python
import torch
import tiktoken
# Load model and mappings (GPTLanguageModel is defined in the repo's training code)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = GPTLanguageModel().to(device)
model.load_state_dict(torch.load("MiniStoryGPT-30M.pth", map_location=device))
old_to_new = torch.load("old_to_new.pt", map_location=device)
new_to_old = torch.load("new_to_old.pt", map_location=device)
enc = tiktoken.get_encoding("gpt2")
# Remap and generate
prompt = "Once upon a time,"
original_context = enc.encode(prompt)
remapped_context = [old_to_new.get(token, 0) for token in original_context]
context = torch.tensor([remapped_context], dtype=torch.long, device=device)
with torch.no_grad():
output = model.generate(context, max_new_tokens=300)
story = enc.decode([new_to_old.get(new_id, 0) for new_id in output[0].tolist()])
print(story)
```
The model requires `old_to_new.pt` and `new_to_old.pt` for token remapping due to the reduced vocabulary. See the [GitHub repository](https://github.com/johnnycwatt/MiniStoryGPT) for the full training and sampling code.
## Training Details
- **Dataset**: TinyStories, preprocessed into tokenized binaries (`train.bin`, `val.bin`) with a 10K-token vocabulary.
- **Preprocessing**: Uses `tiktoken` (GPT-2 tokenizer) with custom remapping to reduce vocab size.
- **Hyperparameters**:
- Batch size: 32
- Context length: 512
- Learning rate: 3e-4
- Dropout: 0.2
- Positional embeddings: Learned (not sinusoidal)
- **Loss**: ~1.55 on training set at 50,000 iterations (validation loss ~1.60).
To reproduce training, run `prepare_data.py` and `train.py` from the GitHub repo.
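The remapping files `old_to_new.pt` and `new_to_old.pt` pair each kept GPT-2 token id with a compact id below 10,000, and the sampler falls back to id 0 for tokens outside the reduced vocabulary. The repository defines the actual construction; a plausible frequency-based sketch looks like:

```python
from collections import Counter

def build_vocab_maps(token_ids, vocab_size=10_000):
    """Keep the most frequent tokens; remap them to compact ids starting at 0."""
    most_common = [tok for tok, _ in Counter(token_ids).most_common(vocab_size)]
    old_to_new = {old: new for new, old in enumerate(most_common)}
    new_to_old = {new: old for old, new in old_to_new.items()}
    return old_to_new, new_to_old
```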
## Limitations
- **Educational Focus**: MiniStoryGPT is for learning, not optimized for production-grade performance.
- **Output Quality**: Generates simple, child-friendly stories but may produce incoherent or repetitive text due to its small size and limited training.
- **Vocabulary**: Uses a reduced 10K-token vocab, which may miss some nuances of the full GPT-2 tokenizer.
- **Compute**: Trained on a single GPU; scaling to larger datasets or models requires more resources.
## License
Released under the MIT License. Feel free to use, modify, and distribute for research and educational purposes.
## References
- Vaswani et al., "Attention is All You Need" (https://arxiv.org/abs/1706.03762)
- Karpathy’s nanoGPT (https://github.com/karpathy/nanoGPT)
- Karpathy’s Zero to Hero Course (YouTube)
- TinyStories Dataset (https://huggingface.co/datasets/roneneldan/TinyStories)
- TinyStories Paper (https://arxiv.org/abs/2305.07759)
## Contact
For questions or contributions, open an issue on the [GitHub repository](https://github.com/johnnycwatt/MiniStoryGPT) or contact [email protected]
|
Qwen/Qwen-Image-Edit-2509
|
Qwen
| 2025-09-22T13:37:12Z | 0 | 24 |
diffusers
|
[
"diffusers",
"safetensors",
"image-to-image",
"en",
"zh",
"arxiv:2508.02324",
"license:apache-2.0",
"diffusers:QwenImageEditPlusPipeline",
"region:us"
] |
image-to-image
| 2025-09-22T13:09:40Z |
---
license: apache-2.0
language:
- en
- zh
library_name: diffusers
pipeline_tag: image-to-image
---
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/qwen_image_edit_logo.png" width="400"/>
</p>
<p align="center">
💜 <a href="https://chat.qwen.ai/"><b>Qwen Chat</b></a>   |   🤗 <a href="https://huggingface.co/Qwen/Qwen-Image-Edit-2509">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/models/Qwen/Qwen-Image-Edit-2509">ModelScope</a>   |    📑 <a href="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/Qwen_Image.pdf">Tech Report</a>    |    📑 <a href="https://qwenlm.github.io/blog/qwen-image-edit/">Blog</a>   
<br>
🖥️ <a href="https://huggingface.co/spaces/Qwen/Qwen-Image-Edit">Demo</a>   |   💬 <a href="https://github.com/QwenLM/Qwen-Image/blob/main/assets/wechat.png">WeChat (微信)</a>   |   🫨 <a href="https://discord.gg/CV4E9rpNSD">Discord</a>  |    <a href="https://github.com/QwenLM/Qwen-Image">Github</a>  
</p>
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/Qwen-Image/edit2509/edit2509_top.jpg" width="1600"/>
</p>
# Introduction
This September, we are pleased to introduce Qwen-Image-Edit-2509, the monthly iteration of Qwen-Image-Edit. To experience the latest model, please visit [Qwen Chat](https://qwen.ai) and select the "Image Editing" feature.
Compared with Qwen-Image-Edit released in August, the main improvements of Qwen-Image-Edit-2509 include:
* **Multi-image Editing Support**: For multi-image inputs, Qwen-Image-Edit-2509 builds upon the Qwen-Image-Edit architecture and is further trained via image concatenation to enable multi-image editing. It supports various combinations such as "person + person," "person + product," and "person + scene." Optimal performance is currently achieved with 1 to 3 input images.
* **Enhanced Single-image Consistency**: For single-image inputs, Qwen-Image-Edit-2509 significantly improves editing consistency, specifically in the following areas:
- **Improved Person Editing Consistency**: Better preservation of facial identity, supporting various portrait styles and pose transformations;
- **Improved Product Editing Consistency**: Better preservation of product identity, supporting product poster editing;
- **Improved Text Editing Consistency**: In addition to modifying text content, it also supports editing text fonts, colors, and materials;
* **Native Support for ControlNet**: Including depth maps, edge maps, keypoint maps, and more.
## Quick Start
Install the latest version of diffusers
```bash
pip install git+https://github.com/huggingface/diffusers
```
The following code snippet illustrates how to use `Qwen-Image-Edit-2509`:
```python
import os
import torch
from PIL import Image
from diffusers import QwenImageEditPlusPipeline
pipeline = QwenImageEditPlusPipeline.from_pretrained("Qwen/Qwen-Image-Edit-2509", torch_dtype=torch.bfloat16)
print("pipeline loaded")
pipeline.to('cuda')
pipeline.set_progress_bar_config(disable=None)
image1 = Image.open("input1.png")
image2 = Image.open("input2.png")
prompt = "The magician bear is on the left, the alchemist bear is on the right, facing each other in the central park square."
inputs = {
"image": [image1, image2],
"prompt": prompt,
"generator": torch.manual_seed(0),
"true_cfg_scale": 4.0,
"negative_prompt": " ",
"num_inference_steps": 40,
"guidance_scale": 1.0,
"num_images_per_prompt": 1,
}
with torch.inference_mode():
output = pipeline(**inputs)
output_image = output.images[0]
output_image.save("output_image_edit_plus.png")
print("image saved at", os.path.abspath("output_image_edit_plus.png"))
```
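As a lightweight sketch, the keyword arguments used above can be collected by a small helper so the same call works for single- and multi-image editing (the argument names are taken from the snippet above; `generator` is left to the caller, who supplies `torch.manual_seed(seed)` when running the pipeline):

```python
def make_edit_inputs(images, prompt, steps=40, true_cfg=4.0):
    """Build the kwargs dict for QwenImageEditPlusPipeline.

    `images` is a list of 1-3 PIL images (the recommended range for
    multi-image editing); defaults mirror the quick-start example.
    """
    return {
        "image": list(images),           # one or more input images
        "prompt": prompt,
        "true_cfg_scale": true_cfg,
        "negative_prompt": " ",
        "num_inference_steps": steps,
        "guidance_scale": 1.0,
        "num_images_per_prompt": 1,
    }
```

For single-image editing, pass a one-element list, e.g. `pipeline(**make_edit_inputs([image1], prompt), generator=torch.manual_seed(0))`.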
## Showcase
**The primary update in Qwen-Image-Edit-2509 is support for multi-image inputs.**
Let’s first look at a "person + person" example:

Here is a "person + scene" example:

Below is a "person + object" example:

In fact, multi-image input also supports commonly used ControlNet keypoint maps—for example, changing a person’s pose:

Similarly, the following examples demonstrate results using three input images:



---
**Another major update in Qwen-Image-Edit-2509 is enhanced consistency.**
First, regarding person consistency, Qwen-Image-Edit-2509 shows significant improvement over Qwen-Image-Edit. Below are examples generating various portrait styles:

For instance, changing a person’s pose while maintaining excellent identity consistency:

Leveraging this improvement along with Qwen-Image’s unique text rendering capability, we find that Qwen-Image-Edit-2509 excels at creating meme images:

Of course, even with longer text, Qwen-Image-Edit-2509 can still render it while preserving the person’s identity:

Person consistency is also evident in old photo restoration. Below are two examples:


Naturally, besides real people, generating cartoon characters and cultural creations is also possible:

Second, Qwen-Image-Edit-2509 specifically enhances product consistency. We find that the model can naturally generate product posters from plain-background product images:

Or even simple logos:

Third, Qwen-Image-Edit-2509 specifically enhances text consistency and supports editing font type, font color, and font material:



Moreover, the ability for precise text editing has been significantly enhanced:


It is worth noting that text editing can often be seamlessly integrated with image editing—for example, in this poster editing case:

---
**The final update in Qwen-Image-Edit-2509 is native support for commonly used ControlNet image conditions, such as keypoint control and sketches:**



## License Agreement
Qwen-Image is licensed under Apache 2.0.
## Citation
We kindly encourage citation of our work if you find it useful.
```bibtex
@misc{wu2025qwenimagetechnicalreport,
title={Qwen-Image Technical Report},
author={Chenfei Wu and Jiahao Li and Jingren Zhou and Junyang Lin and Kaiyuan Gao and Kun Yan and Sheng-ming Yin and Shuai Bai and Xiao Xu and Yilei Chen and Yuxiang Chen and Zecheng Tang and Zekai Zhang and Zhengyi Wang and An Yang and Bowen Yu and Chen Cheng and Dayiheng Liu and Deqing Li and Hang Zhang and Hao Meng and Hu Wei and Jingyuan Ni and Kai Chen and Kuan Cao and Liang Peng and Lin Qu and Minggang Wu and Peng Wang and Shuting Yu and Tingkun Wen and Wensen Feng and Xiaoxiao Xu and Yi Wang and Yichang Zhang and Yongqiang Zhu and Yujia Wu and Yuxuan Cai and Zenan Liu},
year={2025},
eprint={2508.02324},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2508.02324},
}
```
|
BlinkDL/rwkv7-g1
|
BlinkDL
| 2025-09-22T13:23:42Z | 0 | 119 | null |
[
"pytorch",
"text-generation",
"causal-lm",
"rwkv",
"en",
"zh",
"fr",
"es",
"de",
"pt",
"ru",
"it",
"ja",
"ko",
"vi",
"ar",
"dataset:HuggingFaceFW/fineweb-edu",
"dataset:mlfoundations/dclm-baseline-1.0",
"dataset:cerebras/SlimPajama-627B",
"dataset:EleutherAI/pile",
"dataset:bigcode/starcoderdata",
"dataset:oscar-corpus/OSCAR-2301",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-03-07T17:05:50Z |
---
language:
- en
- zh
- fr
- es
- de
- pt
- ru
- it
- ja
- ko
- vi
- ar
tags:
- pytorch
- text-generation
- causal-lm
- rwkv
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb-edu
- mlfoundations/dclm-baseline-1.0
- cerebras/SlimPajama-627B
- EleutherAI/pile
- bigcode/starcoderdata
- oscar-corpus/OSCAR-2301
---
# RWKV7-G1 "GooseOne" pure RNN reasoning model
More info & Gradio demo: https://rwkv.com/
For developers: https://github.com/BlinkDL/RWKV-LM
Use rwkv pip package 0.8.29+ for RWKV-7 inference: https://pypi.org/project/rwkv/
**There should not be any space at the end of your input (strip it), or you will upset the tokenizer and see a non-English response.**
Please use the **latest G1a models** if available (they are better at everything).
Chat prompt (replace any `\n\n` inside `your_prompt` with `\n`):
```
User: stuff
Assistant: stuff
User: your_prompt
Assistant:
```
Think prompt:
```
User: your_prompt
Assistant: <think
```
Think prompt, alternative output style, valid for the 20250922 and newer models (note the space before "think"; you can use "think a bit" / "think a lot" for shorter or longer thinking):
```
User: your_prompt think
Assistant: <think
```
Fake think prompt:
```
User: your_prompt
Assistant: <think>
</think
```
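The prompt-hygiene rules above (strip trailing whitespace, collapse `\n\n` inside the user turn) can be wrapped in a small helper. This is a sketch: the blank-line separator between turns is an assumption based on the formats shown:

```python
def build_chat_prompt(history, your_prompt, think=False):
    """Assemble an RWKV-G1 chat prompt from (user, assistant) turns.

    Inside the user prompt itself, blank lines are collapsed to single
    newlines and trailing whitespace is stripped, as the card advises.
    """
    cleaned = your_prompt.replace("\n\n", "\n").rstrip()
    parts = [f"User: {u}\n\nAssistant: {a}" for u, a in history]
    tail = f"User: {cleaned}\n\nAssistant:"
    if think:
        tail += " <think"  # think-prompt variant
    parts.append(tail)
    return "\n\n".join(parts)
```

Note the returned string never ends in a space, so it is safe to feed to the tokenizer directly.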
|
exzort/VineBot-checkpoint-2530
|
exzort
| 2025-09-22T13:12:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"lora",
"sft",
"transformers",
"trl",
"text-generation",
"conversational",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] |
text-generation
| 2025-09-22T13:12:15Z |
---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0
- lora
- sft
- transformers
- trl
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
Kelanelirum/Qwen3-0.6B-Gensyn-Swarm-toothy_stalking_starfish
|
Kelanelirum
| 2025-09-22T13:09:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am toothy_stalking_starfish",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T13:08:58Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am toothy_stalking_starfish
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pepijn223/pi05_base_fp32
|
pepijn223
| 2025-09-22T13:07:09Z | 178 | 1 | null |
[
"safetensors",
"region:us"
] | null | 2025-09-09T14:55:33Z |
# π₀.₅ - Base
This is a PyTorch version of the PI0.5 `pi05_base` model, converted from the original JAX/Flax implementation.
## Model Details
- **Architecture**: PI0.5 (Vision-Language-Action model with discrete state input)
- **Model Type**: PI0.5
- **Domain**: Base model (general purpose)
- **Precision**: 32-bit floating point (fp32)
- **Vision Model**: PaliGemma (gemma_2b)
- **Action Expert**: gemma_300m
## Key Features
- **Discrete State Input**: Uses discrete language tokens for state representation
- **Flow Matching**: Utilizes adaRMSNorm for timestep injection in action expert
- **Enhanced Action Modeling**: Improved action prediction with flow matching approach
## Conversion Details
This model was converted from JAX to PyTorch using the OpenPI conversion script:
```bash
python examples/convert_jax_model_to_pytorch.py \
--checkpoint_dir /pi05_base \
--config_name pi05_base \
--output_path /pi05_base/pytorch/fp32/ \
--precision float32
```
## Usage
```python
from openpi.models_pytorch.pi0_pytorch import PI0Pytorch
import torch
# Load the model
model = PI0Pytorch.from_pretrained("pepijn223/pi05_base_fp32")
# The model expects inputs in the format:
# - images: torch.Tensor of shape [batch, height, width, channels]
# - text: tokenized text prompts
# - proprioceptive_state: robot state information (if applicable)
```
## Model Architecture
The model consists of:
1. **Vision Encoder**: PaliGemma-based vision processing
2. **Language Encoder**: Text prompt understanding
3. **Action Expert**: Specialized network for action prediction
4. **Integration Layer**: Combines multimodal information for action output
## Training Data
This model was trained on robotics datasets appropriate for its domain:
- **DROID models**: Trained on diverse robot manipulation data
- **LIBERO models**: Trained on diverse tabletop manipulation scenarios
- **Base models**: Trained on general robotics datasets
## Limitations
- Model performance depends on similarity between deployment and training environments
- May require domain-specific fine-tuning for optimal performance
- Action space must match the trained action dimension (32)
## Citation
If you use this model, please cite the original OpenPI work:
```bibtex
@article{openpi2024,
title={Open-World Robotic Manipulation with Vision-Language-Action Models},
author={Physical Intelligence},
year={2024},
url={https://github.com/Physical-Intelligence/openpi}
}
```
## Original Repository
[OpenPI GitHub Repository](https://github.com/Physical-Intelligence/openpi)
## License
This model follows the same license as the original OpenPI repository.
|
Quainisnorilon/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-snappy_galloping_squid
|
Quainisnorilon
| 2025-09-22T13:07:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am snappy_galloping_squid",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T13:06:56Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am snappy_galloping_squid
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Jaw00/donut-v6
|
Jaw00
| 2025-09-22T12:17:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-22T10:16:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Desalegnn/Desu-snowflake-arctic-embed-l-v2.0-finetuned-amharic-45k
|
Desalegnn
| 2025-09-22T12:15:57Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:40237",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:Desalegnn/amharic-passage-retrieval-dataset",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-l-v2.0",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l-v2.0",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-22T12:15:09Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:40237
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l-v2.0
widget:
- source_sentence: የሞዴል ጥቃቅንና አነስተኛ ኢንተርፕራይዞች ኤግዚቢሽንና ባዛር የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር
እንደሚፈጠር ተገለጸ
sentences:
- አዲስ አበባ ፣ ነሃሴ 22 ፣ 2012 (ኤፍ ቢ ሲ) ሰኔ 16 ቀን 2010 ዓ.ም በአዲስ አበባ መስቀል አደባባይ ለጠቅላይ ሚኒስትር
ዐቢይ አሕመድ በተካሄደ የድጋፍ ሰልፍ ላይ ቦምብ በመወርወር የሽብር ወንጀል የተከሰሱ አምስት ተከሳሾች የጥፋተኝነት ፍርድ ተፈረደባቸው።ተከሳሾቹ
ጌቱ ቶሎሳ፣ ብርሃኑ ጃፋር፣ ጥላሁን ጌታቸው፣ ደሳለኝ ተስፋዬ እና ባህሩ ቶላ ሲሆኑ የጥፋተኝነት ፍርዱን የፌደራሉ ከፍተኛ ፍርድ
ቤት 1ኛ የወንጀል ችሎት ነው ያስተላለፈው።የዐቃቤ ህግ ክስ እንደሚያመላክተው ተከሳሾቹ ወንጀሉን የፈጸሙት ሰኔ 16 ቀን 2010
ዓ.ም በአዲስ አባባ መስቀል አደባባይ ከረፋዱ አራት ሰአት ላይ በ40 ሜትር ርቀት አካባቢ ለጠቅላይ ሚኒስትር ዐቢይ አሕመድ
በተደረገው የድጋፍ ሰልፍ ላይ ቦንብ በመወርወር ነው።ተከሳሾቹ በ1996 ዓ.ም የወጣውን የኢፌዴሪ የወንጀል ህግ አንቀጽ 32/1ሀ
እንዲሁም አንቀጽ 38 እና የፀረ ሽብርተኝነት አዋጅ ቁጥር 652/2001 አንቀጽ 3 ስር የተመለከተውን በመተላለፍ፤ በሃገሪቱ
ያለውን ለውጥ ተከትሎ በጠቅላይ ሚኒስትር ዐቢይ የሚመራ መንግስት መኖር የለበትም በሚል የራሳቸውን አላማ ለማራመድ በማሰብ መንቀሳቀሳቸውን
ዐቃቤ ህግ በክሱ አመላክቷል።በዚህም ከ1ኛ እስከ 4ኛ ያሉ ተከሳሾች ከሱሉሉታ ከተማ መነሻቸውን በማድረግ በስልክ በመደዋወልና
በአካል በመገናኘት በድጋፍ ሰልፉ ላይ እንዴት ቦምብ መወርወር እንዳለባቸው ሲዘጋጁ ቆይተዋልም ነው ያለው ዐቃቤ ህግ፡፡በዚህ
መልኩ በ1ኛ ተከሳሽ ቤት ቡራዩ በማደር 2ኛ ተከሳሽ በሚያሽከረክረው ተሽከርካሪ 2ኛ ተከሳሽ ያዘጋጀውን ኤፍ1 ቦምብ በመያዝ
ከ3 እስከ 5ኛ ያሉ ተከሳሾች ጋር ከፒያሳ ወደ ቴድሮስ አደባባይ በመምጣትና የድጋፍ ቲሸርት ልብስ ገዝተው በመልበስ ተመሳስለው
መግባታቸው ተጠቅሷል።በድጋፍ ሰልፉ ላይ ጠቅላይ ሚኒስትር ዐቢይ ንግግር ካደረጉ በኋላ ተከሳሾቹ በ40 ሜትር ርቀት ላይ ቦምብ
የወረወሩ ሲሆን በዚህም የሁለት ሰዎች ህይወት ሲያልፍ ከ163 በላይ ሰዎች ላይ ደግሞ ከከባድ እስከ ቀላል የአካል ጉዳት እንደደረሰባቸውም
ዐቃቤ ህግ አስረድቷል፡፡የዐቃቤ ህግን የሰነድና የሰው ምስክር እንዲሁም የተከሳሾችን መከላከያ የመረመረው ፍርድ ቤቱ ተከሳሾቹን
በተከሰሱበት ወንጀል ጥፋተኛ ብሏቸዋል።በተከሳሾቹ ላይ የቅጣት ውሳኔ ለመስጠትም ለጥቅምት 17 ቀን 2013 ዓ.ም ተለዋጭ ቀጠሮ
ሰጥቷል።እስከ ጥቅምት 17 ድረስ ግን የቅጣት ማቅለያዎችን ማቅረብ እንደሚቻል ትዕዛዝ ሰጥቷል።በታሪክ አዱኛ
- 'አዲሱ ገረመው አዲስ አበባ፡- የ2013 በጀት ዓመት የ4 ሚሊዮን ብር ሽያጭና የገበያ ትስስር እንደሚፈጥር የፌዴራል የከተሞች
የስራ ዕድል ፈጠራና የምግብ ዋስትና ኤጀንሲ አስታወቀ። ከተሳታፊዎች ውስጥ 50 በመቶዎቹ ሴቶች መሆናቸው ተጠቆመ ። ኤጀንሲው
ለአዲስ ዘመን
ጋዜጣ በላከው መግለጫ
እንዳስታወቀው፤ በ2013 በጀት
አመት አንደኛው ዙር
የሞዴል ጥቃቅንና አነስተኛ
ኢንተርፕራይዞች ሀገር አቀፍ
ኤግዚቢሽንና ባዛር ‹‹ዘላቂነት
ያለው የገበያ ትስስር
ለስራ ዕድል ፈጠራና
ለኢንተርፕራይዞች ልማት መሰረት
ነው ›› በሚል
መሪ ቃል ከታህሳስ
22 እስከ ታህሳስ 28 ቀን
2013 ዓ.ም በጀሞ አንድ አደባባይ ትራፊክ መብራት ፊትለፊት ለሰባት ተከታታይ ቀናት የሚካሄድ ይሆናል። የ4 ሚሊዮን ብር ሽያጭና
የገበያ ትስስር እንዲሚፈጥርም ይጠበቃል። በኤግዚቢሽንና ባዛሩ ላይ ከሁሉም ክልሎችና ከተሞች የተውጣጡ 202 የጥቃቅን እና አነስተኛ
ኢንተርፕራይዞች 10 አነስተኛና መካከለኛ ኢንዱስትሪዎች የሚሳተፉ ሲሆን፤ ሴቶች 50 በመቶ እና አካል ጉዳተኛ ሦስት በመቶ በማሳተፍ
ምርትና አገልግሎታቸው ከ20ሺ በላይ በሚሆን ተጠቃሚ የህብረተሰብ ክፍል እንዲጎበኝ ይደረጋል ብሏል ። ባዛሩ ከተለያዩ ክልሎችና
አካባቢዎች የተሰባሰቡና በልዩ ልዩ ዘርፎች የተሰማሩ ብቁና ተወዳዳሪ ኢንተርፕራይዞችንና አንቀሳቃሾችን የሚያሳትፍ ሲሆን፤ በአንድ
ማዕከል በማገናኘት በሚፈጠረው ትውውቅና የልምድ ልውውጥ በመካከላቸው ጤናማ የውድድር ስሜት ለማቀጣጠል እንደሚያስችልም “ኤጀንሲው
አመልክቷል ። ባህላዊና ዘመናዊ የጨርቃጨርቅና
አልባሳት ምርት ውጤቶች፣
ባህላዊና ዘመናዊ የቆዳ
አልባሳትና የቆዳ ምርት
ውጤቶች፣ ባህላዊ የዕደ-ጥበባትና
ቅርጻ-ቅርጽ ሥራዎችና
ውጤቶች፣ የብረታብረት፣ የእንጨት
ሥራና የኢንጅነሪንግ ስራዎችና
ውጤቶች፣ የአግሮ-ፕሮሰሲንግ
ምርቶች እና የከተማ
ግብርና ውጤቶች፣ የቴክኖሎጂ
ውጤቶችና የፈጠራ ስራዎች፣
ፈሳሽ ሳሙና፣አልኮል፣ሳኒታይዘር፣
የአፍና አፍንጫ መሸፈኛ
ጭንብል/ማስኮች/፣
እና ሌሎችም ምርቶች
በኤግዚቢሽንና ባዛሩ እንደሚቀርቡ
አስታውቋል። የአዲስ አበባ ነጋዴ ሴቶች ማህበር፣ የሴቶች ኢንተርፕርነርሺፕ ልማት ፕሮግራም፣ ኢንተርፕርነርሺፕ ልማት ማዕከል፣
ፋሽን ዲዛይን አሶሴሽን፣ የሴቶች ራስ አገዝ ድርጅት፣ የባህልና ቱሪዝም ሚኒስቴር በዕደ ጥበብ ዘርፍ የተሰማሩ ኢንተርፕራይዞችና
ሌሎችም ተሳታፊ ኢንተርፕራይዞች እንደሚሆኑ ጠቁሟል። ሁነቱ የተሞክሮ ልውውጥና
የንግድ ልማት ግንዛቤ
ከማዳበሩም ባሻገር፤ ኢንተርፕራይዞች
ከተጠቃሚው ህብረተሰብ ጋር
በሚያደርጉት ግንኙነት ዘላቂ
የገበያ ትስስር ለመፍጠር
የሚያስችል ምቹ አጋጣሚ
ይሆንላቸዋል። ምርቶቻቸውንና አገልግሎታቸውን
ለተጠቃሚዎች በቀጥታ በመሸጥም
ተጠቃሚ እንደሚሆኑም እጀንሲው
አስታውቋል ።አዲስ ዘመን ታህሳስ 22/2013'
- የአሜሪካው ሜሪየም ዌብስተር መዝገበ ቃላት እንደ ኦክስፎርድ መዝገበ ቃላት ሁሉ ታዋቂና ዓለም አቀፍ ተቀባይነት ያለው መዝገበ
ቃላት ነው።አንዲት ወጣት ጥቁር አሜሪካዊት ታዲያ ለዚህ መዝገበ ቃላት አሳታሚ በጻፈቸው ደብዳቤ ምክንያት መዝገበ ቃላቱ ዘረኝነት
ወይም (racism) ለሚለው የእንግሊዝኛ ቃል የትርጉም ፍቺ ማሻሻያ ለማድረግ ወስኗል።
- source_sentence: የደኢሕዴን ከፍተኛ አመራሮች በሐዋሳ እየመከሩ ነው
sentences:
- 'የሁለት ዞኖች ከፍተኛ አመራሮች ታግደዋል የደቡብ ኢትዮጵያ ሕዝቦች ዴሞክራሲያዊ ንቅናቄ (ደኢሕዴን) ከፍተኛ አመራሮች ከሐሙስ
ሐምሌ 18 እስከ 22 ቀን 2011 ዓ.ም. ድረስ በሐዋሳ እየመከሩ ነው፡፡ ከፍተኛ አመራሮቹ በክልሉ ውስጥ በተከሰተው ወቅታዊ
ችግርና በአገራዊ ጉዳዮች ላይ እንደሚወያዩ፣ በተለይ በድርጅቱ ህልውና ላይ እንደሚያተኩሩም ታውቋል፡፡ የደኢሕዴን ሊቀመንበር
ወ/ሮ ሙፈሪያት ካሚል በምክክሩ ላይ ባደረጉት ንግግር፣ በአገር ደረጃና በደቡብ ክልል የፖለቲካና የፀጥታ ጉዳዮች ላይ ወጥ አቋም
ያለው አመራር አስፈላጊነትን አውስተዋል፡፡ ከዚህ አንፃርም አመራሩ ራሱን በመፈተሽ ለለውጥ ዝግጁ መሆን እንዳለበት አስታውቀዋል፡፡
እንደ ወ/ሮ ሙፈሪያት ማብራሪያ የደኢሕዴን ህልውና መረጋገጥ የሚችለው፣ አመራሩ ከመቼውም ጊዜ በላይ መንቀሳቀስ ሲችል ብቻ እንደሆነ
ነው፡፡ አመራሩ ምንም ነገር እንደማይመጣ በመኩራራት ወይም በወቅታዊ ሁኔታዎች በመሥጋት የሚቀጥል ከሆነ ውጤት እንደማይኖር፣
በወቅቱ ተጨባጭ ሁኔታ ላይ በዝርዝር በመወያየት የድርጅቱ ህልውናን ማስቀጠል ላይ ትኩረት መስጠት እንደሚገባ አስረድተዋል፡፡
ይህ በዚህ እንዳለ ደኢሕዴን የሲዳማ ዞን፣ የሐዋሳ ከተማና የሃድያ ዞን ከፍተኛ አመራሮችን ማገዱንና ለወላይታና ለካፋ ዞኖች
አመራሮች ደግሞ ማስጠንቀቂያ መስጠቱን አስታውቋል፡፡ ከክልልነት ጥያቄ ጋር በተያያዘ በተለይ በሲዳማ ዞን ወረዳዎችና በሐዋሳ
ከተማ በተፈጸሙ ጥቃቶች የበርካቶች ሕይወት ማለፉን፣ የበርካቶች ቤት ንብረት መውደሙን ተከትሎ የደቡብ ክልል በፌዴራል መንግሥት
የፀጥታ አካላት ኮማንድ ፖስት ሥር እንዲተዳደሩ መወሰኑ የሚታወስ ሲሆን፣ በዚህም ምክንያት የደኢሕዴን ሥራ አስፈጻሚ ኮሚቴ በሐዋሳ
ከተማ ባደረገው ስብሰባ የአመራሮቹን የዕግድ ውሳኔ አሳልፏል፡፡ በዚህ ስብሰባው የክልሉን የፀጥታ ሁኔታ እንደገመገመ የገለጸው
የሥራ አስፈጻሚ ኮሚቴው፣ በተፈጠረ የፀጥታ ችግሮች ሳቢያ የሲዳማ ዞንና የሐዋሳ ከተማን፣ እንዲሁም የሃዲያ ዞን ‹‹የፊት አመራሮች››
እንዳገደ አስታውቋል፡፡ በተያያዘም በወላይታና በካፋ ዞኖች እየታዩ ያሉ ሁኔታዎች የሕግ ተጠያቂነትን የሚያስከትሉ ስለሆኑ፣ አመራሩ
የሕዝቡን ደኅንነት ለማስጠበቅ እንዲሠራ ሲል አስጠንቅቋል፡፡ በዚህም ሳቢያ የሲዳማ ዞን አስተዳዳሪ አቶ ቃሬ ጫዊቻና የሐዋሳ
ከተማ ከንቲባ አቶ ሱካሬ ሹዳ መታገዳቸውን ለማወቅ ተችሏል፡፡ የሥራ አስፈጻሚ ኮሚቴው በሐዋሳና በአካባቢው ሐምሌ 11 ቀን 2011
ዓ.ም. ክልልነትን እናውጃለን በሚል በተፈጸመ ጥቃት የተጎዱ ቤተሰቦችን መልሶ ለማቋቋም እንደሚሠራ በማስታወቅ፣ የጥፋቱ ተሳታፊዎችም
ሆኑ አስተባባሪዎች የሕግ ተጠያቂ እንዲሆኑ እሠራለሁ ብሏል፡፡ አሁን ለተከሰተው ጥፋትም ሆነ እየተስተዋለ በሚገኘው ሥርዓተ አልበኝነት
ውስጥ የአመራሩ ሚና ከፍተኛ መሆኑን ያመነው የሥራ አስፈጻሚ ኮሚቴው፣ ይኼንን ለማረም ከሥራ አስፈጻሚ እስከ ታችኛው የአመራር
ሥርዓት ድረስ ፈትሾ ዕርምጃ እንደሚወስድ ቃል ገብቷል፡፡ '
- 'አዲስ አበባ፣ ጥር 2፣ 2012 (ኤፍ.ቢ.ሲ) በፓኪስታን ደቡብ ምእራብ ኩዌታ ከተማ በመስጊድ ላይ በተፈፀመ የቦብም ጥቃት
የሞቱ ሰዎች ቁጥር 15 መድረሱን ፖሊስ አስታወቀ።በአርብ ፀሎት ላይ በነበሩ ሰዎች ላይ በተፈፀመው የቦምብ ጥቃቱ ከሞቱት ሰዎች
በተጨማሪም ከ20 በላይ ሰዎች ላይ የተለያየ መጠን ያለው ጉዳት መድረሱንም ነው የገለፀው።በመስጊድ ላይ ለተፈፀመው ጥቃትም በአካባቢው
የሚንቀሳቀሰው የአሸባሪው ኢስላሚክ ስቴት (አይ.ኤስ) ቡድን ኃላፊነት መውሰዱ ተነገሯል።በሽብር ጥቃቱ በአፍጋኒስታን የሚንቀሳቀሰው
የታሊባን ቡድን አመራሮች ተገድለዋል ቢባልም፤ ታሊባን ግን አመራሮቼ ላይ ጉዳት አልደረሰም ሲል አስተባብሏል።ምንጭ፦ '
- በኢትዮጵያ ፕሪምየር ሊግ ዘጠነኛ ሳምንት መቐለ 70 እንደርታ በሜዳው ሲዳማ ቡናን 3-1 ካሸነፈ በኋላ የሁለቱ ቡድኖች አሰልጣኞች
አስተያየታቸውን ሰጥተዋል። ” ሲዳማ ቡና በጥሩ ወቅታዊ አቋም የሚገኝ ቡድን በመሆኑ ጨዋታው ከባድ ነበር” – ገ/መድኅን ኃይሌ
– መቐለ 70 እንደርታስለ ጨዋታው” ጨዋታው ከባድ ነበር፤ ሲዳማ ቡና በጥሩ ወቅታዊ አቋም የሚገኝ ቡድን ነው ፤ የያዙት ነጥብም
ለዚህ ጨዋታ ጥሩ የስነልቦና ጥንካሬ አስገኝቶላቸዋል። በአንፃሩ እኛ አራት ጨዋታዎች ሳናሸንፍ ነው ወደ ጨዋታው የገባነው። በዚ
ምክንያት ጨዋታው አክብዶብን ነበር። በአጠቃላይ ጨዋታውን አሸንፈናል። በቀጣይ ጨዋታዎች ቀስ በቀሰ ወደ አሸናፊነት መጥተን ይህን
እናስቀጥላለን። ”“ዳኝነት ላይ ያየሁት ነገር ጥሩ አይደለም” ዘርዓይ ሙሉ – ሲዳማ ቡና ስለ ጨዋታው ” ከዕረፍት በፊት ከጨዋታ
ውጪ ኳሱ በኋላ ተጫዋቾቻችን መረጋጋት አልቻሉም። በጨዋታው አሳፋሪ ዳኝነት ነው ያየሁት። ስለ ጨዋታው ብጠይቀኝ አሳፋሪ እና
ሚዛናዊት የሌለው ዳኝነት ነው። የተቆጠርቡን ግቦች እኛ ላይ ጥፋት እየተፈፀሙ የተቆጠሩ ናቸው። ከጨዋታ ውጭ ሆኖም ግብ ይቆጠራል።
በቃ ይህንን ነው ያየሁት። ከዚ ውጭ ግን መቐለ ለማሸነፍ የነበረው ተነሳሽነት ጥሩ ነበር። እንደ ቡድን ተንቀሳቅሰዋል እኛም
የተሻለ ኳስ ተቆጣጥረን ተጫውተናል። እንዳያችሁት ኳሱን መስርተን ነው የወጣነው ግን በተለያዩ ስህተቶች ግብ ሲቆጠርብን የተጫዋቾቻችን
ብቃት አወረደው። የምንፈልገው እንቅስቃሴ ያላደረግነው በዳኞች ምክንያት ነው። ገና በሰባተኛ ደቂቃ ነው የተጀመረው ይሄ ነገር።
ጨዋታው ጥሩ ሆኖ ሳለ ሚዛኑ የጠበቀ ዳኝነት አላየንም። ዳኝነቱ ልክ ካልሆነ የጨዋታው እንቅስቃሴ እንዳለ ይበላሻል ይሄ ሁሉ
ደጋፊ የገባው ጥሩ ጨዋታ ለማየት ነው። ለምንድነው ተጫዋቾች ሮጠው ዳኛ ላይ የሚሄዱት። በተደጋጋሚ ስህተት ይሰራ ነበር። እኛ
ተጫዋቾቻችንን ብናረጋጋም የሚያደርጉት ስህተት ለሌላ ነገር የሚዳርግ ነበር። ዳኞቹ አቅም አንሷቸው ነው ብዬ አላስብም፤ ሆን
ተብሎ የተደረገ ነገር ነው። ዳኝነት ላይ ያየሁት ነገር ጥሩ አይደለም። መቐለን ግን እንደ ቡድን ጥሩ ነው እንኳን ደስ አላቹ
ማለት እፈልጋለው። ”ስለ ስታድየሙ ድባብ” ደጋፊው የሚደነቅ ደጋፊ ነው። በስርዓት ነው ቡድኑን የሚደግፈው። ምንም ነገር ቢፈጠር
ቡድኑን ነበር ሲደግፍ የነበረው። ”ዳኝነት ላይ ስለሰጠው አስተያየት” እኔ አዳላ አላልኩም። ግን ብቃት ማነስ ነው ብዬ አላስብም።
እነዚህ ሁሉ ግቦች እስኪቆጠሩ ብቃት ማነስ አይደለም። በአጠቃላይ ዳኝነቱ ሚዘናዊ አልነበረም። ሁሉም ግብ ላይ የዳኛ ተፅዕኖ
አለበት፤ በቃ ይሄን ነው የምለው። አንዱን ከጨዋታ ውጪ ብለህ አንዱን የምታፀድቅ ከሆነ ስህተት ነው። “
- source_sentence: የከምባታና ጠንባሮ አርሶአደሮች
sentences:
- በደሴ ማረሚያ ቤት በተደረገ የኮቪድ-19 ምርመራ 13 ሰዎች ቫይረሱ እንዳለባቸው ማረጋገጡን የከተማው ጤና መምሪያ አስታወቀ።የመምሪያው
ኃላፊ አቶ አብዱልሃሚድ ይመር በተለይ ለቢቢሲ እንዳስታወቁት 12ቱ የህግ ታራሚዎች ሲሆኑ ሌላኛው ደግሞ የማረሚያ ቤቱ ባልደረባ
ናቸው።እንደ አቶ አብዱልሃሚድ ገለጻ ከሆነ ከማረሚያ ቤቱ ጋር በመነጋገርም አዲስ የሚገቡ ታራሚዎች ለ14 ቀናት ለብቻቸው እንዲቆዩ
ከማድረግ በተጨማሪ በመጨረሻዎቹ ቀናት ላይ ምርመራ ሲደረግላቸው ቆይቷል።ከሐምሌ 20 በኋላ ማረሚያ ቤቱ የገቡ 46 ታራሚዎች
ላይ በተደረገ ምርመራ 10 ሰዎች ኮሮናቫይረስ እንዳለባቸው ለማረጋገጥ ተችሏል።“ታራሚዎቹ ከተለያዩ አካባቢዎች የመጡ ናቸው።
ከተለያዩ ከደቡብ ወሎ ወረዳዎች እና ከደሴ ከተማም የተገኙ ናቸው” ብለዋል።በሁለተኛ ዙር 60 ሰዎች ላይ በተደረገ ምርመራ ሦስቱ
ቫይረሱ እንዳለባቸው ተረጋግጧል።በሁለተኛው ዙር ቫይረሱ ከተገኘባቸው መካከል በመጀመሪያው ዙር እንዳለባቸው ከታወቁ ሰዎች ጋር
ንክኪ የነበራቸው እና አንድ ማረሚያ ቤቱ ባልደረባ ይገኙበታል።የማረሚያ ቤቱን የሕግ ታራሚዎች እና ባልደረባዎችን በሙሉ ለመመርመር
መቻሉንም አቶ አብዱልሃሚድ አስታውቀዋል።ቫይረሱ የተገኘባቸው ቦሩ ሜዳ መጀመሪያ ደረጃ ሆስፒታል የተላኩ ሲሆን፤ ተጓዳኝ ህመም
ያለበት አንድ ታራሚ ካሳየው የህመም ምልክት ውጭ ሁሉም በጥሩ ሁኔታ ላይ እንደሚገኙ ተናግረዋል።በማረሚያ ቤቱ የቫይረሱ ስርጭት
እንዳይስፋፋ አዲስ የሚገቡትን እና ነባር ታራሚዎችን ከመመርመር ባለፈ የግንዛቤ ማስጨበጫ ሥራ፣ የኬሚካል ርጭት፣ ርቀትን ማስጠበቅ
እና ንጽህና የማስጠበቅ ሥራ እየተከናወነ ነው ብለዋል።ባለፉት ወራት በአማራ ክልል በተደረገ የኮሮናቫይረስ ምርመራ 83 አሽከርካሪዎች
እና ረዳቶቻቸው ቫይረሱ ተገኝቶባቸዋል።በክልሉ ቫይረሱ ከተገኘባቸው ሰዎች መካካል 23 የህክምና ባለሙያዎች እንደሚገኙበትም ከአማራ
ህብረተሰብ ጤና ኢንስቲትዩት ያገኘነው መረጃ ያሳያል።በአጠቃላይ በኢትዮጵያ በኮቪድ-19 የተያዙ ሰዎች ቁጥር 25,118 የደረሱ
ሲሆን የሟቾች ቁጥር 463 ደርሷል። እንዲሁም አጠቃላይ ከበሽታው ያገገሙ ሰዎች 11,034 ደርሰዋል።
- 'በደቡብ ክልል ከፋ ዞን ዴቻ ወረዳ ከ20 ሺህ በላይ የከምባታና ጠምባሮ አርሶአደሮች በማንነታችን ጥቃት ደርሶብናል በማለት
እየተፈናቀሉ ናቸው፡፡አርሶአደሮቹ የተፈናቀሉት ከሶስት ሳምንት በፊት በወረዳው ከ30 በላይ ሲቪሎች በታጠቁ ግለሰቦች በአሰቃቂ
ሁኔታ መገደላቸውን ተከትሎ ነው ተብሏል፡፡ጉዳያችንን ለክልሉ መንግሥት ብናሳውቅም ችላ ተብለናል ሲሉ አርሶአደቹ ተናግረዋል።
አሁን ለችግር መጋለጣቸውንም ለቪኦኤ አስረድተዋል፡፡የከምባታ ጠንባሮ ዞን በበኩሉ የተፈናቀሉ ዜጎች በስቃይ ላይ መሆናቸውን ገልጦ
መፍትሔ እየተፈለገ መሆኑን አስታውቋል፡፡ '
- ባሕር ዳር፡ መስከረም 7/2012 ዓ.ም (አብመድ) በጣልያን ባሕር ዳርቻ ጠባቂዎች ሕይወታቸው የተረፉ 90 ስደተኞችን ማልታ
ለመቀበል ተስማማች፡፡በቀጣዩ ሳምንት ደግሞ በአዲስ የስደተኞች መከፋፈያ አሠራር ዘዴ ላይ የአውሮፓ ኅብረት ሊመክር ነው፡፡የማልታ
የሕይወት አድን ትብብር ማዕከል በጠየቀው መሠረት ትናንት የጣልያን ባሕር ዳርቻ ጠባቂ ቡድን ስደተኞቹን ታድጓል፡፡ ከሊቢያ የባሕር
ክልል ውጭ እየሰመጠች ከነበረች ጀልባ ነው ስደተኞቹን ማትረፍ የተቻለው፡፡ ማልታ በመጀመሪያ ስደተኞቹን ወደ ሀገሯ ለማስገባት
ፈቃደኛ አልሆነችም ነበር፡፡
- source_sentence: የአዲስ አበባ ከተማ አስተዳደር የጀመረው ኦዲት ወደ ባለ ኮከብ ሆቴሎችና ኢንዱስትሪዎች ተሸጋገረ
sentences:
- የኢትዮጵያ እግር ኳስ ፌዴሬሽን ከኢትዮጵያ ብሮድካስቲንግ ኮርፖሬሽን (EBC) ጋር በተፈራረመው የመግባቢያ ሰነድ ስምምነት ዙሪያ
ከፕሪሚየር ሊግ ክለቦች ጋር ነገ ከጠዋቱ 4፡00 ጀምሮ በኢንተርኮንትኔንታል ሆቴል ውይይት ያካሂዳል፡፡በውይይቱ ፌዴሬሽኑና EBC
የኢትዮጵያ ፕሪሚየር ሊግ ጨዋታዎችን በቀጥታ የተሌቭዥን ስርጭት አማካኝነት በመላ ኢትዮጵያ ተደራሽ ለማድረግ ነሃሴ 6/2007
ዓ.ም የተፈራረሙትን የመግባቢያ ሰነድ አስመልክቶ ስለ ስምምነቱ ፋይዳና ሂደት ገለፃ የሚደረግ ሲሆን ከፕሪሚየር ሊግ ክለቦች
ለሚነሱ ጥያቄዎች ማብራሪያ ይሰጣል፡፡ በክለቦች መብትና ተጠቃሚነት ዙሪያም ግልጽ ውይይት ይካሄዳል፡፡ስምምነቱ ይፋ መደረጉንና
መፈረሙን ተከትሎ ከተለያዩ በላድርሻ አከላት የተነሱት ጥያቄዎች በተለይም የኢትዮጵያ ቡና ስፖርት ክለብ በደብዳቤ አቋሙን የገለጸበት
አግባብ ተቀባይነት እንዳለው ታምኖበታል፡፡ ነገ ከጠዋቱ 4፡00 ጀምሮ የሚካሄደውና የፕሪሚየር ሊግ ክለቦች ፕሬዝዳንቶች እና
ስራ አስኪያጆች የሚሳተፉበት የውይይት መድረክ ስምምነቱን አስመልክቶ ሊነሱ የሚችሉትን ጥያቄዎች በመቀበል የማስተካካያ ርምጃ
ለመውሰድ የሚያስችል በመሆኑ ሁሉም ክለቦች የውይይቱ ተሳታፊ እንዲሆኑ ፌዴሬሽኑ ጥሪውን አስተላልፋል፡፡ፌዴሬሽኑና ኢቢሲ አለም
አቀፍና የሀገር ውስጥ ጨዋታዎችን በቴሌቭዥን የቀጥታ ስርጭት ለማስተላለፍ የተፈራረሙት የመግባቢያ ሰነድ ዓላማዎች በዋነኝነት
የወጣቱን ትውልድ የእግር ኳስ ስፖርት ተነሳሽነት ማሳደግ፣ የብሔራዊ እና አገር ውስጥ ውድድሮችን የቀጥታ ስርጭት ተደራሽነት
ማረጋገጥ እንዲሁም ለእግር ኳስ ስፖርት ዘላቂና አስተማማኝ እድገት አመቺ ሁኔታዎችን በመፍጠር ላይ እንደሚመሰረት መገለጹ ይታወሳል፡፡ማስታወሻ፡-
በውይይቱ የሚሳተፉት የፌዴሬሽኑ የስራ ሃላፊዎችና የክለቦች ተወካዮች ብቻ ናቸው፡፡
- ለመጀመርያ ጊዜ በተሟላ ደረጃ መሬትና መሬት ነክ ይዞታዎችን ኦዲት በማድረግ ላይ የሚገኘው የአዲስ አበባ ከተማ አስተዳደር፣
የኦዲት አድማሱን በማስፋት በባለ ኮከብ ሆቴሎችና በኢንዱስትሪዎች ላይ ቆጠራ ሊያካሂድ ነው፡፡ የአዲስ አበባ ከተማ አስተዳደር
ከ1995 ዓ.ም. ጀምሮ እስከ ኅዳር 2004 ዓ.ም. የከተማ ቦታ በሊዝ ስለመያዝ የሚደነግገው እስኪወጣበት ጊዜ ድረስ፣ ላለፉት
15 ዓመታት በኢንዱስትሪ ዞኖችና በተናጠል ለሚካሄዱ ፋብሪካዎች በርካታ ቦታዎችን ሰጥቷል፡፡ ከዚህ በተጨማሪ ለበርካታ ሆቴሎች
ግንባታ የሚሆን ሰፋፊ ቦታዎችንም እንዲሁ አቅርቧል፡፡ነገር ግን አስተዳደሩ በሰጣቸው ቦታዎች ላይ ስለተከናወነው ልማትም ሆነ፣
የተከናወኑት ግንባታዎች በውላቸው መሠረት ስለመካሄዳቸው በትክክል የተጠናቀረ መረጃ እንደሌለ ይገልጻል፡፡በከተማው ውስጥ የሚገኙ
አምራች ኢንዱስትሪዎችንና ባለ ኮከብ ሆቴሎችን ቁጥር ለማወቅ፣ በአግባቡ ሥራዎችን ባላካሄዱት ላይ ደግሞ የማስተካከያ ዕርምጃ
ለመውሰድ ኦዲት እንደሚከናወን ለማወቅ ተችሏል፡፡የአዲስ አበባ ከተማ አስተዳደር ምክትል ከንቲባ ታከለ ኡማ (ኢንጂነር) ለሪፖርተር፣
‹‹እስካሁን ግንባታ ሳይካሄድባቸው ለዓመታት ታጥረው የቆዩ ከአራት ሚሊዮን ካሬ ሜትር በላይ ቦታ መልሰን ወስደናል፤›› ብለዋል፡፡‹‹‹ይህ
ትልቅ ሥራ ነው፤›› በማለት ምክትል ከንቲባው ገልጸው፣ በቀጣይ ደግሞ በሆቴሎች፣ በኢንዱስትሪዎች፣ በድንጋይ ማምረቻ ካባዎች፣
እንዲሁም በመኖሪያ ቤቶች ላይ ኦዲት ተካሂዶ ዕርምጃ ይወሰዳል ሲሉ ገልጸዋል፡፡ ‹‹ሥራው ውስብስብ በመሆኑ የሚካሄደው ኦዲት
አንዴ ብቻ ሳይሆን ሦስት፣ አራት ጊዜ ይታያል፡፡ ካስፈለገም የማረጋገጡን ሥራ ማዕከላዊ ስታትስቲክስ ኤጀንሲ ሊያከናውን ይችላል፤››
በማለት ምክትል ከንቲባው አስረድተዋል፡፡በአዲስ አበባ ከተማ አምራች ኢንዱስትሪዎች፣ ሆቴሎች፣ ለድንጋይ ማውጪያ የተሰጡ ቦታዎች
ያሉበት ወቅታዊ ሁኔታ በትክክል አይታወቅም፡፡ ለእነዚህ ዘርፎች የቀረበው ቦታ ለታለመለት ዓላማ በትክክል ስለመዋሉ፣ ከዘርፉ
የሚመነጨው ኢኮኖሚም ሆነ የተፈጠረው የሥራ ዕድል ሽፋን እምብዛም አይታወቅም፡፡ይህንን ሥራ በተሻለ ደረጃ ለመሥራት የከተማው
ኢንዱስትሪ ቢሮ ከማዕከላዊ ስታትስቲክስ ኤጀንሲ ጋር በጋራ ለመሥራትም መስማማታቸው ታውቋል፡፡ የማዕከላዊ ስታትስቲክስ ኤጀንሲ
የቢዝነስ ስታትስቲክስ ዳይሬክተር አቶ ዘለዓለም ኃይለ ጊዮርጊስ፣ በሆቴሎችና በኢንዱስትሪዎች ላይ ቆጠራውን ለማካሄድ ሙሉ ዝግጅት
እየተደረገ መሆኑን ለሪፖርተር ገልጸው፣ በጉዳዩ ላይ ዝርዝር መረጃ ከመስጠት ተቆጥበዋል፡፡
- ጠቅላይ ሚኒስትር ዶክተር አብይ አህመድ ለተለያዩ የመንግስት የስራ ሀላፊዎች ሹመት መስጠታቸውን የጠቅላይ ሚኒስቴር ጽህፈት ቤት
አስታውቋል።በጠቅላይ ሚኒስትር ጽህፈት ቤት መግለጫ መሰረት፦ 1.ዶክተር አምባቸው መኮንን፦ የጠቅላይ ሚንስትሩ የመሰረተ ልማትና
የከተማ ልማት አማካሪ ሚንስትር 2.አቶ ገብረእግዚአብሔር አርአያ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት
ረዳት ተጠሪ 3.አቶ ጫኔ ሽመካ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ 4.አቶ ጫላ
ለሚ፦ በሚንስትር ዴኤታ ማዕረግ በህዝብ ተወካዮች ምክር ቤት የመንግስት ረዳት ተጠሪ5.አቶ ተስፋሁን ጎበዛይ፦ የጠቅላይ ሚንስትሩ
የብሔራዊ ደህንነት ጉዳዮች አማካሪ ሚንስትር ዴኤታ6.ብርጋዴል ጄኔራል አህመድ ሀምዛ፦ የብረታ ብረት ኢንጂነሪንግ ኮርፖሬሽን
ዋና ዳይሬክተር7.አቶ ሞቱማ መቃሳ፦ የጠቅላይ ሚንስትሩ የብሔራዊ ደህንነት ጉዳዮች አማካሪ ሚንስትር ዴኤታ8.አቶ ከበደ ይማም፦
የአካባቢ ጥበቃ ደንና የአየር ንብረት ለውጥ ኮሚሽን ምክትል ኮሚሽነር9.አቶ አዘዘው ጫኔ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር10.አቶ
አወል አብዲ፦ የብረታ ብረት ኢንጂነሪንግ ኮርፖሬሽን ምክትል ዋና ዳይሬክተር11.አቶ ሙሉጌታ በየነ፦ የጉምሩክ ኮሚሽን ምክትል
ኮሚሽነር12. ዶክተር ፅጌረዳ ክፍሌ፦ የብሔራዊ ኤች. አይ. ቪ/ኤድስ መከላከያና መቆጣጠሪያ ጽ/ቤት ዋና ዳይሬክተር13.ወይዘሮ
ያምሮት አንዱዓለም፦ የአርማወር ሐሰን የምርምር ኢንስቲትዩት ምክትል ዋና ዳይሬክተር14.ዶክተር ሚዛን ኪሮስ፦ የኢትዮጵያ ጤና
መድህን ኤጀንሲ ዋና ዳይሬክተር15.አቶ ሀሚድ ከኒሶ፦ የሰነዶች ማረጋገጫና ምዝገባ ኤጀንሲ ምክትል ዋና ዳይሬክተር16.አቶ ከበደ
ጫኔ፦ የስደተኞችና ከስደት ተመላሾች ጉዳይ ኤጀንሲ ዋና ዳይሬክተር17.ወይዘሮ ምስራቅ ማሞ፦ የጉምሩክ ኮሚሽን ምክትል ኮሚሽነር
ሆነው ተሹመዋል።
- source_sentence: በቁጥጥር ስር የዋሉ የህወሓት ታጣቂዎች ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ ከመሆን
እንዲቆጠቡ አስገነዘቡ
sentences:
- 'የፕሬዚዳንት ዶናልድ ትራምፕ ተቺዎች እንደሚሉት፤ ፕሬዚዳንቱ ለዘመናት የአሜሪካ ወዳጆች በሆኑት ኢትዮጵያ እና ግብፅ መካከል
ታላቁ የሕዳሴ ግድብን በተመለከተ ውጥረት ቀስቅሰዋል።ይህም በአሜሪካ እና በአፍሪካ የዲፕሎማሲ ታሪክ ትልቁ የትራምፕ ስህተት
ነው ይላሉ።ትራምፕ ከቀናት በፊት ግብፅ "ግድቡን ልታፈነዳው ትችላለች" ማለታቸው ይታወሳል። ጥር ላይ ፕሬዚዳንቱ "ስምምነት
መፍጠር ችያለሁ፤ ከባድ ጦርነትም አስቁሜያለሁ" ብለው የኖቤል የሰላም ሽልማት እንደሚገባቸው መናገራቸው ይታወሳል።ነገር ግን
ተሸላሚ የሆኑት ጠቅላይ ሚንስትር ዐብይ አሕመድ ነበሩ ።ትራምፕ የኖቤል የሰላም ሽልማት እንደሚገባቸው ሲናገሩ ጉዳዩን ግልፅ
ባያደርጉትም፤ በግብፁ ፕሬዘዳንት አብዱልፈታህ አል-ሲሲ ጥሪ መሠረት በኢትዮጵያ እና በግብፅ መካከል ጣልቃ ስለመግባታቸው እየተናገሩ
እንደነበረ ይታመናል።ትራምፕ በአንድ ወቅት አብዱልፈታህ አል-ሲሲን "የኔ ምርጡ አምባገነን" ማለታቸው አይዘነጋም።ግብፅ ታላቁ
ሕዳሴ ግድብ "ለደህንነቴ ያሰጋኛል" ትላለች። ሱዳንም የግብፅን ያህል ባይሆንም ስጋቱን ትጋራለች። በሌላ በኩል ኢትዮጵያ የኃይል
አመንጪውን ግድብ አስፈላጊነት አስረግጣ ትገልጻለች።ኬንያ የሚገኘው የአፍሪካ ቀንድ የጸጥታ ጉዳይ ተንታኝ ረሺድ አብዲ እንደሚለው፤
በግድቡ ዙሪያ ኢትዮጵያ እና ግብፅን ለማደራደር አሜሪካ ጣልቃ መግባቷ የሁለቱን አገሮች ውጥረት አባብሷል።"ኢትዮጵያ በግድቡ
አቅራቢያ የጸጥታ ኃይሏን እያጠናከረች ነው። ቤንሻንጉል ጉሙዝ ክልልን ከበረራ ውጪ ማድረጓ አንዱ ማሳያ ነው። በግድቡ ዙሪያ
በረራ የሚያግድ መሣሪያም ተገጥሟል። ግብፅ የወታደራዊ ቅኝት በረራ ልታደርግ እንደምትችል ከመስጋት የመነጨ ሊሆን ይችላል" ይላል።ተንታኙ
እንደሚናገረው፤ ትራምፕ ዓለም አቀፍ ዲፕሎማሲ እንዴት እንደሚሠራ የሚገነዘቡ አይመስልም።"በንግዱ ዓለም እንደሚደረገው ስምምነት
ላይ መድረስ ይቻላል የሚል የተዛባ አመለካከት አላቸው። የውጪ ጉዳይ መያዝ ያለበትን ጉዳይ ግምዣ ቤት ድርድሩን እንዲመራ ያደረጉትም
ለዚህ ነው። ከመነሻውም መጥፎ የነበረውን ሁኔታም አባብሶታል" ሲልም ረሺድ ያስረዳል።ኢትዮጵያ ከግብፅ እና ከሱዳን ጋር ያለው
ድርድር ሳይቋጭ ግድቡን ለመሙላት በመወሰኗ አሜሪካ የ100 ሚሊዮን ዶላር እርዳታ ማጠፏ ተዘግቧል።ረሺድ "ኢትዮጵያ አሜሪካ እንደከዳቻት
ይሰማታል። ብዙ ኢትዮጵያውያን ትራምፕን የጥላቻ ምልክት አድርገውታል" በማለት ሁኔታውን ይገልጻል።የዴሞክራት እጩው ጆ ባይደን
እንዲያሸንፉም የበርካታ ኢትዮጵያውያን ምኞት ነው።አሜሪካ የሚገኘው ሴንተር ፎር ግሎባል ዴቨሎፕመንት ውስጥ የፖሊሲ አጥኚ ደብሊው
ጉዬ ሙር እንደሚሉት፤ የትራምፕ አስተዳደር እስራኤልና የአረብ ሊግ አገራት መካከል ሰላም መፍጠር ስለሚፈልግ ከግብፅ ጎን መቆሙ
የሚጠበቅ ነው።ግብፅ ከእስራኤል ጋር ዘመናት ያስቆጠረ ዲፕሎማሲያዊ ትስስር አላት። ትራምፕ የአረብ ሊግ አገራት ለእስራኤል እውቅና
እንዲሰጡ ጥረት እያደረጉ ስለሆነ አብዱልፈታህ አል-ሲሲን ማስቀየም አይፈልጉም።ሙር እንደሚናገሩት፤ የትራምፕ አስተዳደር በግድቡ
ዙርያ ለግብፅ የወገነውም በዚህ ምክንያት ነው።ትራምፕ ሱዳንን በተመለከተ የደረሱበት ውሳኔ የአረቡን አገራት ከእስራኤል ጋር
ለማስስማት የሚያደርጉት ጥረት አንድ አካል ነው።ሱዳን ከእስራኤል ጋር ስምምነት ለማድረግ ወስናለች።በእርግጥ የአገሪቱ ተጠባባቂ
የውጪ ጉዳይ ሚንስትር ውሳኔው ገና በሕግ አውጪ መጽደቅ እንዳለበት ቢናገሩም፤ ሱዳን እንደ ጎርጎሮሳውያኑ 1967 ላይ የአረብ
ሊግ አገራት ውይይት ማስተናገዷ መዘንጋት የለበትም። በውይይቱ "ከእስራኤል ጋር መቼም ሰላም አይፈጠርም። መቼም ቢሆን ለእስራኤል
እውቅና አይሰጥም። ድርድርም አይካሄድም" ተብሎም ነበር።ሱዳን ከእስራኤል ጋር ለመስማማት በመፍቀዷ ትራምፕ ሽብርን ከሚድፉ አገሮች
ዝርዝር እንደሚያስወጧት ተናግረዋል። ይህም ለምጣኔ ሀብቷ ማገገም የሚረዳ ድጋፍ እንድታገኝ ያግዛታል።ትራምፕ በድጋሚ ከተመረጡ
ኢትዮጵያ ግድቡን በተመለከተ ሱዳን እና ግብፅ ላላቸው ስጋት አንዳች መልስ እንድትሰጥ ጫና እንደሚያደርጉ ይጠበቃል።አጥኚው እንደሚሉት፤
ሱዳን ሽብርን ከሚደግፉ አገሮች ዝርዝር ከወጣች የትራምፕ አስተዳደር በምላሹ የሚጠብቀው ነገር አለ።"ከእስራኤል ጋር ስምምነት
የመፍጠር ጉዳይ የሱዳን ማኅበረሰብን የከፋፈለ ነው። መንግሥት የራሱ የጸጥታ ጥያቄዎች እያሉበት ይህን ውሳኔ ማሳለፉ ችግር ሊያስከትል
ይችላል" ብለዋል። ትራምፕ አፍሪካን በተመለከተ የሚያራምዱት ፖሊሲ፤ በአሜሪካ እና በቻይና መካከል የሚካሄድ ''አዲሱ ቀዝቃዛ
ጦርነት'' ነው ሲል ረሺድ ይገልጸዋል።ለምሳሌ ቻይና ከግዛቷ ውጪ የመጀመሪያውን ወታደራዊ መቀመጫ የከፈተችው በጅቡቲ ነው። ማዕከሉ
የሚገኘው አሜሪካ የሶማሊያ ታጣቂዎች ላይ የአየር ጥቃት ለመሰንዘር ያቋቋመችው ማዕከል አቅራቢያ ነው።በቅርቡ የአሜሪካ ተዋጊ
ጀቶች ለማረፍ ሲሞክሩ፤ ቻይና የአሜሪካውያን ወታደሮችን እይታ የሚጋርድ መሣሪያ መሞከሯን ረሺድ ያጣቅሳል። "የትራምፕ አስተዳደር
ጸረ ቻይና ፖሊስ ያራምዳል" የሚለው ተንታኙ ሁኔታው ለአፍሪካ ቀንድ አስቸጋሪ መሆኑንም ያስረዳል።ቻይና አፍሪካ ውስጥ ያላትን
የንግድ የበላይነት ለመቀልበስ፤ የትራምፕ አስተዳደር ''ፕሮስፔሪቲ አፍሪካ ኢን 2018'' የተባለ ፖሊሲ ነድፏል።በአፍሪካ እና
በአሜሪካ መካከል የሚካሄደውን ንግድ በእጥፍ የማሳደግ እቅድ አለ። አምና የአሜሪካ መንግሥት የንግድ ተቋሞች አፍሪካ ውስጥ እንዲሠሩ
የገንዘብ ድጋፍ የሚሰጥበት አሠራር ዘርግቷል።ሙር እንደሚሉት፤ የአሜሪካ ድርጅቶች ከቻይና ተቋሞች ጋር መወዳደር አልቻልንም ብለው
ቅሬታ ስላሰሙ የገንዘብ ድጋፍ ለመስጠት ተወስኗል። "የአይቲ ዘርፍ እንደ ማሳያ ቢወሰድ፤ 70 በመቶ የአፍሪካ ኢንፎርሜሽን ቴክኖሎጂ
የተመሠረተው በቻይና ድርጅቶች ላይ ነው" ሲሉ ያብራራሉ። የትራምፕ አስተዳደር በ2025 የሚያበቃውን ከ30 በላይ የአፍሪካ አገሮች
ተጠቃሚ እንዲሆኑበት ታስቦ በአሜሪካ ለአፍሪካውያን የተሰጠው ከታሪፍና ከቀረጥ ነፃ የገበያ ዕድል (አፍሪካ ግሮዝ ኤንድ ኦፖርቹኒቲ
አክት-አጎዋ) የመሰረዝ እቅድ አለው። ለአፍሪካ ምርቶች የአሜሪካን ገበያ ክፍት የሚያደርገው ስምምነት የተፈረመው በቢል ክሊንተን
ነበር።አሜሪካ አሁን ላይ ትኩረቷ የሁለትዮሽ የንግድ ስምምነት እንደሆነ ሙር ይናገራሉ። ለምሳሌ ከኬንያ ጋር ንግግር እየተካሄደ
ነው።ኬንያ፤ የቻይና ''ቤልት ኤንድ ሮድ ኢኒሽየቲቭ'' አካል እንደሆነች ይታወቃል። ስምምነቱ ቻይናን ከአፍሪካ ጋር በንግድ
የሚያስተሳስርና የቻይና ዓለም አቀፍ ተደማጭነት የሚያጎላ እንደሆነ አሜሪካ ታምናለች።ትራምፕ ከኬንያ ጋር በቀጥታ ከተስማሙ በኋላ
ተመሳሳይ መንገድ ተጠቅመው ከሌሎች የአፍሪካ አገሮች ጋር የመሥራት ውጥን እንዳላቸው ሙር ይናገራሉ።ይህ የትራምፕ መንገድ፤ ከአፍሪካ
ሕብረት የንድግና ኢንዱስትሪ ኮሚሽነር አልበርት ሙቻንጋን ሐሳብ ጋር ይጣረሳል።እሳቸው የአፍሪካ አገራት በተናጠል ሳይሆን በአንድነት
ከአሜሪካ ጋር ስምምነት እንዲያደርጉ ይፈልጋሉ። ሙር እንደሚሉት፤ የአሜሪካ ውሳኔ የአፍሪካ ሕብረት የአህጉሪቱን ምጣኔ ሀብት
ለማጣመር ከሚያደርገው ጥረት ጋር ይጣረሳል።ሕብረቱ፤ አፍሪካን የዓለም ትልቋ ነጻ የንግድ ቀጠና የማድረግ አላማ አለው።ትራምፕ
ግን በጥምረት ከሚሠሩ ኃይሎች ጋር በጋራ ያለመደራደር አዝማሚያ ያሳያሉ ሲሉ አጥኚው ያክላሉ።የትራምፕ ተቀናቃኝ ጆ ባይደን ካሸነፉ
የአፍሪካ ፖሊሲያቸው ምን እንደሚሆን እስካሁን አልገለጹም።"የባይደን አስተዳደር በኦባማ ጊዜ ወደነበረው ሂደት ሊመለስ ይችላል"
ይላሉ ሙር። '
- አዲስ አበባ፣ ጥር 2፣ 2013(ኤፍ ቢ ሲ) የጋምቤላ ክልል ወጣት የሴራ ፖለቲካ አራማጆችን በዝምታ አይመለከቱም ሲል የክልሉ
ብልጽግና ፓርቲ ወጣቶች ሊግ ሰብሳቢ ወጣት ራች ጎች ገለጸ።የክልሉ የብልጽግና ፓርቲ ወጣቶች ሊግ የውይይት መድረክ ትናንት ተካሂዷል።ከአሁን
በፊት በነበረው የፖለቲካ ሴራ ወጣቱም ሆነ መላው የክልሉ ህዝብ ተጠቃሚ ሳይሆን ቆይቷል ያለው ሰብሳቢው ይህንን የህዝብ ጥቅም
የማያረጋግጥ የፖለቲካ ሴራ አካሄድ የክልሉ ወጣት እንደማይቀበለው ገልጿል።የክልሉ ህዝብ እኩል ተጠቃሚ የመሆን ዕድል ማግኘቱን
አስታውሶ፤ “በቀጣይ የሴራ ፖለቲካ አራማጆችን ወጣቱ በዝምታ አይመለከትም” ብሏል።የሊጉ ምክትል ሰብሳቢ ወጣት ኡጁሉ ቢሩ በበኩሉ
“ከአሁን በጎጥና በመንደር በመከፋፈል አንድነቱን ለመሸርሽር ሲሰራ ነበር” ብሏል።ህዝቡ ልዩነቶች እንዳማያስፈልጉ በመረዳቱ በክልሉ
ሰላም መረጋገጡን ጠቅሶ፤ “በቀጣይ በሚስማሙና በሚያግባቡ ጎዳዮች ዙሪያ እንሰራለን” ሲል ተናግሯል።የመድረኩ ተሳታፊ ወጣቶችም
ሀገርን ማልማትና ማሳደግ በሚያስችሉ ጉዳዮች ላይ ትኩረት ማድረግ እንደሚገባ በመግለጽ ሐሳብ አንስተዋል።ለዘንድሮ ምርጫ ሰላማዊ
ሂደትና ለተጀመረው የብልጽግና ጉዞ ስኬታማነት የበኩላቸውን አስተዋጽኦ ለማበርከት ዝግጁ መሆናቸውንም አረጋግጠዋል።ከጽንፈኝነትና
ከብሄርተኝነት አስተሳሰቦች በመውጣት መንግስት በጀመራቸው የሰላም፣ የዴምክራሲና የልማት ስራዎች በንቃት ለመሳተፍ ዝግጁ እንደሆኑ
መግለፃቸውን ኢዜአ ዘግቧል።የክልሉ ብልጽግና ፓርቲ ጽህፈት ቤት ኃላፊ አቶ ላክደር ላክባክ ፤ በሀገሪቱ እየተካሄደ ያለውን ሁለንተናዊ
ለውጥና የብልፅግና ጉዞ እውን ለማድረግ ወጣቱ ኃይል የማይተካ ሚና አለው ብለዋል።ከፌስቡክ ገፃችን በተጨማሪ ወቅታዊ፣ ትኩስ
እና የተሟሉ መረጃዎችን ለማግኘት፡-የፋና ድረ ገጽ ይጎብኙ፤ተንቀሳቃሽ ምስሎችን ለማግኘት የፋና ቴሌቪዥን የዩቲዩብ ቻናል ሰብስክራይብ
ያድርጉፈጣን መረጃዎችን ለማግኘት ትክክለኛውን የፋና ቴሌግራም ቻናል ይቀላቀሉከዚህ በተጨማሪም በትዊተር ገጻችን ይወዳጁንዘወትር
ከእኛ ጋር ስላሉ እናመሰግናለን!
- አዲስ አበባ ፣ ህዳር 1 ፣ 2013 (ኤፍ ቢ ሲ) ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ መሆን የለባቸውም
ሲሉ በቁጥጥር ስር የዋሉ የጽንፈኛው ህወሓት ቡድን ታጣቂዎች ገለጹ።ከአንድ ሳምንት በፊት በትግራይ ክልል በነበረው የመከላከያ
ሰራዊት ሰሜን ዕዝ ላይ በህወሓት ቡድን የተፈጸመውን ጥቃት ተከትሎ የሃገር መከላከያ ሠራዊት በጠቅላይ ሚኒስትር ዐቢይ አሕመድ
በተሰጠው ሃገርን የማዳን ተልዕኮ ሕግ ለማስከበር የዘመቻ ሥራዎችን እያከናወነ ይገኛል።የሠራዊቱ 5ኛ ሜካናይዝድ ክፍለ ጦር የህወሓትን
ታጣቂዎች በቁጥጥር ስር አውሏል።በቁጥጥር ስር የዋሉት ታጣቂዎች የትግራይ ልዩ ኃይልን የተቀላቀሉት ኑሯቸውን አሸንፈው ለማደግ
እንጂ ከሃገር መከላከያ ሠራዊት ጋር ለመዋጋት አለመሆኑን ገልጸዋል።ኑሮን ለማሸነፍ በሚል ወደ ልዩ ኃይሉ ቢገቡም የህወሓት የጥፋት
ቡድን እኩይ ዓላማ ማስፈጸሚያ ከመሆን ውጪ ያገኙት ነገር አለመኖሩን ነው የተናገሩት።ከሃገር መከላከያ ሠራዊት ጋር መጋጨት ማለት
ከኢትዮጵያ ጋር መጋጨት መሆኑንም ገልጸዋል።የትግራይ ልዩ ኃይል እና ወጣትም የህወሓት የጥፋት ቡድን ሰላባ እንዳይሆኑ ከሃገር
መከላከያ ሠራዊቱ ጎን መቆም እንዳለባቸው ተናግረዋል።ታጣቂዎቹ በቁጥጥር ስር ከዋሉ በኋላ በሃገር መከላከያ ሠራዊቱ የደረሰባቸው
ምንም አይነት ችግር እንደሌለና በአሁኑ ወቅት በጥሩ ሁኔታ ላይ እንደሚገኙም አስረድተዋል።የሃገር መከላከያ ሠራዊት እያከናወነ
ባለው ዘመቻ የትግራይ ልዩ ኃይልና ሚሊሻ አባላት በቁጥጥር ስር እየዋሉ መሆኑን ኢዜአ ዘግቧል።
datasets:
- Desalegnn/amharic-passage-retrieval-dataset
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: Snowflake Arctic Embed L Amharic
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 1024
type: dim_1024
metrics:
- type: cosine_accuracy@1
value: 0.7564303287855066
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8848132408857079
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9172444643256542
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9416237978080966
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7564303287855066
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2949377469619026
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18344889286513083
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09416237978080964
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7564303287855066
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8848132408857079
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9172444643256542
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9416237978080966
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8547186854586896
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8262166590337033
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8282607268472338
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.7454708118989041
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.87877432341758
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9118765376873182
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9398344889286513
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7454708118989041
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.29292477447252663
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18237530753746362
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09398344889286513
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7454708118989041
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.87877432341758
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9118765376873182
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9398344889286513
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.848356501861952
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.818424822400444
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8204738239167285
name: Cosine Map@100
---
# Snowflake Arctic Embed L Amharic
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) on the [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/Desalegnn/amharic-passage-retrieval-dataset) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l-v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0) <!-- at revision ac6544c8a46e00af67e330e85a9028c66b8cfd9a -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/Desalegnn/amharic-passage-retrieval-dataset)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
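The pooling stage above keeps only the `[CLS]` token embedding (`pooling_mode_cls_token: True`), and the final `Normalize()` stage L2-normalizes it, so cosine similarity reduces to a plain dot product. A minimal NumPy sketch of those two steps (the token embeddings here are random placeholders, not real model outputs):

```python
import numpy as np

# Placeholder transformer output: (seq_len, hidden_dim) token embeddings.
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(16, 1024))

# (1) CLS pooling: keep only the first token's embedding.
sentence_embedding = token_embeddings[0]

# (2) L2-normalize, as the Normalize() module does.
sentence_embedding = sentence_embedding / np.linalg.norm(sentence_embedding)

# After normalization, cosine similarity between two embeddings is just a dot product.
print(round(float(np.linalg.norm(sentence_embedding)), 6))  # 1.0
```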
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Desalegnn/Desu-snowflake-arctic-embed-l-v2.0-finetuned-amharic-45k")
# Run inference
queries = [
"\u1260\u1241\u1325\u1325\u122d \u1235\u122d \u12e8\u12cb\u1209 \u12e8\u1205\u12c8\u1213\u1275 \u1273\u1323\u1242\u12ce\u127d \u120d\u12e9 \u1283\u12ed\u1209\u1293 \u12c8\u1323\u1271 \u12e8\u1325\u134b\u1275 \u1261\u12f5\u1291 \u12a5\u12a9\u12ed \u12d3\u120b\u121b \u121b\u1235\u1348\u1338\u121a\u12eb \u12a8\u1218\u1206\u1295 \u12a5\u1295\u12f2\u1246\u1320\u1261 \u12a0\u1235\u1308\u1290\u12d8\u1261",
]
documents = [
'አዲስ አበባ ፣ ህዳር 1 ፣ 2013 (ኤፍ ቢ ሲ) ልዩ ኃይሉና ወጣቱ የጥፋት ቡድኑ እኩይ ዓላማ ማስፈጸሚያ መሆን የለባቸውም ሲሉ በቁጥጥር ስር የዋሉ የጽንፈኛው ህወሓት ቡድን ታጣቂዎች ገለጹ።ከአንድ ሳምንት በፊት በትግራይ ክልል በነበረው የመከላከያ ሰራዊት ሰሜን ዕዝ ላይ በህወሓት ቡድን የተፈጸመውን ጥቃት ተከትሎ የሃገር መከላከያ ሠራዊት በጠቅላይ ሚኒስትር ዐቢይ አሕመድ በተሰጠው ሃገርን የማዳን ተልዕኮ ሕግ ለማስከበር የዘመቻ ሥራዎችን እያከናወነ ይገኛል።የሠራዊቱ 5ኛ ሜካናይዝድ ክፍለ ጦር የህወሓትን ታጣቂዎች በቁጥጥር ስር አውሏል።በቁጥጥር ስር የዋሉት ታጣቂዎች የትግራይ ልዩ ኃይልን የተቀላቀሉት ኑሯቸውን አሸንፈው ለማደግ እንጂ ከሃገር መከላከያ ሠራዊት ጋር ለመዋጋት አለመሆኑን ገልጸዋል።ኑሮን ለማሸነፍ በሚል ወደ ልዩ ኃይሉ ቢገቡም የህወሓት የጥፋት ቡድን እኩይ ዓላማ ማስፈጸሚያ ከመሆን ውጪ ያገኙት ነገር አለመኖሩን ነው የተናገሩት።ከሃገር መከላከያ ሠራዊት ጋር መጋጨት ማለት ከኢትዮጵያ ጋር መጋጨት መሆኑንም ገልጸዋል።የትግራይ ልዩ ኃይል እና ወጣትም የህወሓት የጥፋት ቡድን ሰላባ እንዳይሆኑ ከሃገር መከላከያ ሠራዊቱ ጎን መቆም እንዳለባቸው ተናግረዋል።ታጣቂዎቹ በቁጥጥር ስር ከዋሉ በኋላ በሃገር መከላከያ ሠራዊቱ የደረሰባቸው ምንም አይነት ችግር እንደሌለና በአሁኑ ወቅት በጥሩ ሁኔታ ላይ እንደሚገኙም አስረድተዋል።የሃገር መከላከያ ሠራዊት እያከናወነ ባለው ዘመቻ የትግራይ ልዩ ኃይልና ሚሊሻ አባላት በቁጥጥር ስር እየዋሉ መሆኑን ኢዜአ ዘግቧል።',
'የፕሬዚዳንት ዶናልድ ትራምፕ ተቺዎች እንደሚሉት፤ ፕሬዚዳንቱ ለዘመናት የአሜሪካ ወዳጆች በሆኑት ኢትዮጵያ እና ግብፅ መካከል ታላቁ የሕዳሴ ግድብን በተመለከተ ውጥረት ቀስቅሰዋል።ይህም በአሜሪካ እና በአፍሪካ የዲፕሎማሲ ታሪክ ትልቁ የትራምፕ ስህተት ነው ይላሉ።ትራምፕ ከቀናት በፊት ግብፅ "ግድቡን ልታፈነዳው ትችላለች" ማለታቸው ይታወሳል። ጥር ላይ ፕሬዚዳንቱ "ስምምነት መፍጠር ችያለሁ፤ ከባድ ጦርነትም አስቁሜያለሁ" ብለው የኖቤል የሰላም ሽልማት እንደሚገባቸው መናገራቸው ይታወሳል።ነገር ግን ተሸላሚ የሆኑት ጠቅላይ ሚንስትር ዐብይ አሕመድ ነበሩ ።ትራምፕ የኖቤል የሰላም ሽልማት እንደሚገባቸው ሲናገሩ ጉዳዩን ግልፅ ባያደርጉትም፤ በግብፁ ፕሬዘዳንት አብዱልፈታህ አል-ሲሲ ጥሪ መሠረት በኢትዮጵያ እና በግብፅ መካከል ጣልቃ ስለመግባታቸው እየተናገሩ እንደነበረ ይታመናል።ትራምፕ በአንድ ወቅት አብዱልፈታህ አል-ሲሲን "የኔ ምርጡ አምባገነን" ማለታቸው አይዘነጋም።ግብፅ ታላቁ ሕዳሴ ግድብ "ለደህንነቴ ያሰጋኛል" ትላለች። ሱዳንም የግብፅን ያህል ባይሆንም ስጋቱን ትጋራለች። በሌላ በኩል ኢትዮጵያ የኃይል አመንጪውን ግድብ አስፈላጊነት አስረግጣ ትገልጻለች።ኬንያ የሚገኘው የአፍሪካ ቀንድ የጸጥታ ጉዳይ ተንታኝ ረሺድ አብዲ እንደሚለው፤ በግድቡ ዙሪያ ኢትዮጵያ እና ግብፅን ለማደራደር አሜሪካ ጣልቃ መግባቷ የሁለቱን አገሮች ውጥረት አባብሷል።"ኢትዮጵያ በግድቡ አቅራቢያ የጸጥታ ኃይሏን እያጠናከረች ነው። ቤንሻንጉል ጉሙዝ ክልልን ከበረራ ውጪ ማድረጓ አንዱ ማሳያ ነው። በግድቡ ዙሪያ በረራ የሚያግድ መሣሪያም ተገጥሟል። ግብፅ የወታደራዊ ቅኝት በረራ ልታደርግ እንደምትችል ከመስጋት የመነጨ ሊሆን ይችላል" ይላል።ተንታኙ እንደሚናገረው፤ ትራምፕ ዓለም አቀፍ ዲፕሎማሲ እንዴት እንደሚሠራ የሚገነዘቡ አይመስልም።"በንግዱ ዓለም እንደሚደረገው ስምምነት ላይ መድረስ ይቻላል የሚል የተዛባ አመለካከት አላቸው። የውጪ ጉዳይ መያዝ ያለበትን ጉዳይ ግምዣ ቤት ድርድሩን እንዲመራ ያደረጉትም ለዚህ ነው። ከመነሻውም መጥፎ የነበረውን ሁኔታም አባብሶታል" ሲልም ረሺድ ያስረዳል።ኢትዮጵያ ከግብፅ እና ከሱዳን ጋር ያለው ድርድር ሳይቋጭ ግድቡን ለመሙላት በመወሰኗ አሜሪካ የ100 ሚሊዮን ዶላር እርዳታ ማጠፏ ተዘግቧል።ረሺድ "ኢትዮጵያ አሜሪካ እንደከዳቻት ይሰማታል። ብዙ ኢትዮጵያውያን ትራምፕን የጥላቻ ምልክት አድርገውታል" በማለት ሁኔታውን ይገልጻል።የዴሞክራት እጩው ጆ ባይደን እንዲያሸንፉም የበርካታ ኢትዮጵያውያን ምኞት ነው።አሜሪካ የሚገኘው ሴንተር ፎር ግሎባል ዴቨሎፕመንት ውስጥ የፖሊሲ አጥኚ ደብሊው ጉዬ ሙር እንደሚሉት፤ የትራምፕ አስተዳደር እስራኤልና የአረብ ሊግ አገራት መካከል ሰላም መፍጠር ስለሚፈልግ ከግብፅ ጎን መቆሙ የሚጠበቅ ነው።ግብፅ ከእስራኤል ጋር ዘመናት ያስቆጠረ ዲፕሎማሲያዊ ትስስር አላት። ትራምፕ የአረብ ሊግ አገራት ለእስራኤል እውቅና እንዲሰጡ ጥረት እያደረጉ ስለሆነ አብዱልፈታህ አል-ሲሲን ማስቀየም አይፈልጉም።ሙር እንደሚናገሩት፤ የትራምፕ አስተዳደር በግድቡ ዙርያ ለግብፅ የወገነውም በዚህ ምክንያት ነው።ትራምፕ ሱዳንን በተመለከተ የደረሱበት ውሳኔ የአረቡን አገራት ከእስራኤል ጋር ለማስስማት የሚያደርጉት ጥረት አንድ አካል ነው።ሱዳን ከእስራኤል ጋር ስምምነት ለማድረግ ወስናለች።በእርግጥ የአገሪቱ ተጠባባቂ የውጪ ጉዳይ ሚንስትር ውሳኔው ገና በሕግ አውጪ መጽደቅ እንዳለበት ቢናገሩም፤ ሱዳን እንደ ጎርጎሮሳውያኑ 1967 ላይ የአረብ ሊግ አገራት ውይይት ማስተናገዷ መዘንጋት የለበትም። በውይይቱ "ከእስራኤል ጋር መቼም ሰላም አይፈጠርም። መቼም ቢሆን 
ለእስራኤል እውቅና አይሰጥም። ድርድርም አይካሄድም" ተብሎም ነበር።ሱዳን ከእስራኤል ጋር ለመስማማት በመፍቀዷ ትራምፕ ሽብርን ከሚድፉ አገሮች ዝርዝር እንደሚያስወጧት ተናግረዋል። ይህም ለምጣኔ ሀብቷ ማገገም የሚረዳ ድጋፍ እንድታገኝ ያግዛታል።ትራምፕ በድጋሚ ከተመረጡ ኢትዮጵያ ግድቡን በተመለከተ ሱዳን እና ግብፅ ላላቸው ስጋት አንዳች መልስ እንድትሰጥ ጫና እንደሚያደርጉ ይጠበቃል።አጥኚው እንደሚሉት፤ ሱዳን ሽብርን ከሚደግፉ አገሮች ዝርዝር ከወጣች የትራምፕ አስተዳደር በምላሹ የሚጠብቀው ነገር አለ።"ከእስራኤል ጋር ስምምነት የመፍጠር ጉዳይ የሱዳን ማኅበረሰብን የከፋፈለ ነው። መንግሥት የራሱ የጸጥታ ጥያቄዎች እያሉበት ይህን ውሳኔ ማሳለፉ ችግር ሊያስከትል ይችላል" ብለዋል። ትራምፕ አፍሪካን በተመለከተ የሚያራምዱት ፖሊሲ፤ በአሜሪካ እና በቻይና መካከል የሚካሄድ \'አዲሱ ቀዝቃዛ ጦርነት\' ነው ሲል ረሺድ ይገልጸዋል።ለምሳሌ ቻይና ከግዛቷ ውጪ የመጀመሪያውን ወታደራዊ መቀመጫ የከፈተችው በጅቡቲ ነው። ማዕከሉ የሚገኘው አሜሪካ የሶማሊያ ታጣቂዎች ላይ የአየር ጥቃት ለመሰንዘር ያቋቋመችው ማዕከል አቅራቢያ ነው።በቅርቡ የአሜሪካ ተዋጊ ጀቶች ለማረፍ ሲሞክሩ፤ ቻይና የአሜሪካውያን ወታደሮችን እይታ የሚጋርድ መሣሪያ መሞከሯን ረሺድ ያጣቅሳል። "የትራምፕ አስተዳደር ጸረ ቻይና ፖሊስ ያራምዳል" የሚለው ተንታኙ ሁኔታው ለአፍሪካ ቀንድ አስቸጋሪ መሆኑንም ያስረዳል።ቻይና አፍሪካ ውስጥ ያላትን የንግድ የበላይነት ለመቀልበስ፤ የትራምፕ አስተዳደር \'ፕሮስፔሪቲ አፍሪካ ኢን 2018\' የተባለ ፖሊሲ ነድፏል።በአፍሪካ እና በአሜሪካ መካከል የሚካሄደውን ንግድ በእጥፍ የማሳደግ እቅድ አለ። አምና የአሜሪካ መንግሥት የንግድ ተቋሞች አፍሪካ ውስጥ እንዲሠሩ የገንዘብ ድጋፍ የሚሰጥበት አሠራር ዘርግቷል።ሙር እንደሚሉት፤ የአሜሪካ ድርጅቶች ከቻይና ተቋሞች ጋር መወዳደር አልቻልንም ብለው ቅሬታ ስላሰሙ የገንዘብ ድጋፍ ለመስጠት ተወስኗል። "የአይቲ ዘርፍ እንደ ማሳያ ቢወሰድ፤ 70 በመቶ የአፍሪካ ኢንፎርሜሽን ቴክኖሎጂ የተመሠረተው በቻይና ድርጅቶች ላይ ነው" ሲሉ ያብራራሉ። የትራምፕ አስተዳደር በ2025 የሚያበቃውን ከ30 በላይ የአፍሪካ አገሮች ተጠቃሚ እንዲሆኑበት ታስቦ በአሜሪካ ለአፍሪካውያን የተሰጠው ከታሪፍና ከቀረጥ ነፃ የገበያ ዕድል (አፍሪካ ግሮዝ ኤንድ ኦፖርቹኒቲ አክት-አጎዋ) የመሰረዝ እቅድ አለው። ለአፍሪካ ምርቶች የአሜሪካን ገበያ ክፍት የሚያደርገው ስምምነት የተፈረመው በቢል ክሊንተን ነበር።አሜሪካ አሁን ላይ ትኩረቷ የሁለትዮሽ የንግድ ስምምነት እንደሆነ ሙር ይናገራሉ። ለምሳሌ ከኬንያ ጋር ንግግር እየተካሄደ ነው።ኬንያ፤ የቻይና \'ቤልት ኤንድ ሮድ ኢኒሽየቲቭ\' አካል እንደሆነች ይታወቃል። ስምምነቱ ቻይናን ከአፍሪካ ጋር በንግድ የሚያስተሳስርና የቻይና ዓለም አቀፍ ተደማጭነት የሚያጎላ እንደሆነ አሜሪካ ታምናለች።ትራምፕ ከኬንያ ጋር በቀጥታ ከተስማሙ በኋላ ተመሳሳይ መንገድ ተጠቅመው ከሌሎች የአፍሪካ አገሮች ጋር የመሥራት ውጥን እንዳላቸው ሙር ይናገራሉ።ይህ የትራምፕ መንገድ፤ ከአፍሪካ ሕብረት የንድግና ኢንዱስትሪ ኮሚሽነር አልበርት ሙቻንጋን ሐሳብ ጋር ይጣረሳል።እሳቸው የአፍሪካ አገራት በተናጠል ሳይሆን በአንድነት ከአሜሪካ ጋር ስምምነት እንዲያደርጉ ይፈልጋሉ። ሙር እንደሚሉት፤ የአሜሪካ ውሳኔ የአፍሪካ ሕብረት የአህጉሪቱን ምጣኔ ሀብት ለማጣመር ከሚያደርገው ጥረት ጋር ይጣረሳል።ሕብረቱ፤ አፍሪካን የዓለም ትልቋ ነጻ የንግድ ቀጠና የማድረግ አላማ አለው።ትራምፕ ግን በጥምረት ከሚሠሩ ኃይሎች ጋር በጋራ ያለመደራደር አዝማሚያ ያሳያሉ ሲሉ 
አጥኚው ያክላሉ።የትራምፕ ተቀናቃኝ ጆ ባይደን ካሸነፉ የአፍሪካ ፖሊሲያቸው ምን እንደሚሆን እስካሁን አልገለጹም።"የባይደን አስተዳደር በኦባማ ጊዜ ወደነበረው ሂደት ሊመለስ ይችላል" ይላሉ ሙር። ',
'አዲስ አበባ፣ ጥር 2፣ 2013(ኤፍ ቢ ሲ) የጋምቤላ ክልል ወጣት የሴራ ፖለቲካ አራማጆችን በዝምታ አይመለከቱም ሲል የክልሉ ብልጽግና ፓርቲ ወጣቶች ሊግ ሰብሳቢ ወጣት ራች ጎች ገለጸ።የክልሉ የብልጽግና ፓርቲ ወጣቶች ሊግ የውይይት መድረክ ትናንት ተካሂዷል።ከአሁን በፊት በነበረው የፖለቲካ ሴራ ወጣቱም ሆነ መላው የክልሉ ህዝብ ተጠቃሚ ሳይሆን ቆይቷል ያለው ሰብሳቢው ይህንን የህዝብ ጥቅም የማያረጋግጥ የፖለቲካ ሴራ አካሄድ የክልሉ ወጣት እንደማይቀበለው ገልጿል።የክልሉ ህዝብ እኩል ተጠቃሚ የመሆን ዕድል ማግኘቱን አስታውሶ፤ “በቀጣይ የሴራ ፖለቲካ አራማጆችን ወጣቱ በዝምታ አይመለከትም” ብሏል።የሊጉ ምክትል ሰብሳቢ ወጣት ኡጁሉ ቢሩ በበኩሉ “ከአሁን በጎጥና በመንደር በመከፋፈል አንድነቱን ለመሸርሽር ሲሰራ ነበር” ብሏል።ህዝቡ ልዩነቶች እንዳማያስፈልጉ በመረዳቱ በክልሉ ሰላም መረጋገጡን ጠቅሶ፤ “በቀጣይ በሚስማሙና በሚያግባቡ ጎዳዮች ዙሪያ እንሰራለን” ሲል ተናግሯል።የመድረኩ ተሳታፊ ወጣቶችም ሀገርን ማልማትና ማሳደግ በሚያስችሉ ጉዳዮች ላይ ትኩረት ማድረግ እንደሚገባ በመግለጽ ሐሳብ አንስተዋል።ለዘንድሮ ምርጫ ሰላማዊ ሂደትና ለተጀመረው የብልጽግና ጉዞ ስኬታማነት የበኩላቸውን አስተዋጽኦ ለማበርከት ዝግጁ መሆናቸውንም አረጋግጠዋል።ከጽንፈኝነትና ከብሄርተኝነት አስተሳሰቦች በመውጣት መንግስት በጀመራቸው የሰላም፣ የዴምክራሲና የልማት ስራዎች በንቃት ለመሳተፍ ዝግጁ እንደሆኑ መግለፃቸውን ኢዜአ ዘግቧል።የክልሉ ብልጽግና ፓርቲ ጽህፈት ቤት ኃላፊ አቶ ላክደር ላክባክ ፤ በሀገሪቱ እየተካሄደ ያለውን ሁለንተናዊ ለውጥና የብልፅግና ጉዞ እውን ለማድረግ ወጣቱ ኃይል የማይተካ\xa0 ሚና አለው ብለዋል።ከፌስቡክ ገፃችን በተጨማሪ ወቅታዊ፣ ትኩስ እና የተሟሉ መረጃዎችን ለማግኘት፡-የፋና ድረ ገጽ ይጎብኙ፤ተንቀሳቃሽ ምስሎችን ለማግኘት የፋና ቴሌቪዥን የዩቲዩብ ቻናል ሰብስክራይብ ያድርጉፈጣን መረጃዎችን ለማግኘት ትክክለኛውን የፋና ቴሌግራም ቻናል ይቀላቀሉከዚህ በተጨማሪም በትዊተር ገጻችን ይወዳጁንዘወትር ከእኛ ጋር ስላሉ እናመሰግናለን!',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 1024] [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[ 0.7659, -0.0879, 0.1750]])
```
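Because the model was trained with `MatryoshkaLoss`, its embeddings can also be truncated to a shorter prefix (for example 256 dimensions, matching the `dim_256` evaluation results reported on this card) with only a modest quality drop. The Sentence Transformers library supports this via the `truncate_dim` argument to `SentenceTransformer(...)`; the sketch below shows the equivalent manual operation, using random unit vectors in place of real embeddings:

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-ins for full 1024-dimensional, L2-normalized embeddings.
query = rng.normal(size=1024)
doc = rng.normal(size=1024)
query /= np.linalg.norm(query)
doc /= np.linalg.norm(doc)

def truncate(v: np.ndarray, dim: int) -> np.ndarray:
    """Matryoshka truncation: keep the first `dim` components, then re-normalize."""
    v = v[:dim]
    return v / np.linalg.norm(v)

# Cosine similarity at full vs. truncated dimensionality.
full_sim = float(query @ doc)
trunc_sim = float(truncate(query, 256) @ truncate(doc, 256))
print(full_sim, trunc_sim)
```

With real Matryoshka-trained embeddings (unlike the random vectors here), the truncated similarity closely tracks the full-dimensional one, which is what makes the smaller 256-dimensional index practical.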
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_1024`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 1024
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7564 |
| cosine_accuracy@3 | 0.8848 |
| cosine_accuracy@5 | 0.9172 |
| cosine_accuracy@10 | 0.9416 |
| cosine_precision@1 | 0.7564 |
| cosine_precision@3 | 0.2949 |
| cosine_precision@5 | 0.1834 |
| cosine_precision@10 | 0.0942 |
| cosine_recall@1 | 0.7564 |
| cosine_recall@3 | 0.8848 |
| cosine_recall@5 | 0.9172 |
| cosine_recall@10 | 0.9416 |
| **cosine_ndcg@10** | **0.8547** |
| cosine_mrr@10 | 0.8262 |
| cosine_map@100 | 0.8283 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7455 |
| cosine_accuracy@3 | 0.8788 |
| cosine_accuracy@5 | 0.9119 |
| cosine_accuracy@10 | 0.9398 |
| cosine_precision@1 | 0.7455 |
| cosine_precision@3 | 0.2929 |
| cosine_precision@5 | 0.1824 |
| cosine_precision@10 | 0.094 |
| cosine_recall@1 | 0.7455 |
| cosine_recall@3 | 0.8788 |
| cosine_recall@5 | 0.9119 |
| cosine_recall@10 | 0.9398 |
| **cosine_ndcg@10** | **0.8484** |
| cosine_mrr@10 | 0.8184 |
| cosine_map@100 | 0.8205 |
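For reference, the headline metrics in these tables follow the standard definitions and can be reproduced from a ranked result list in a few lines of pure Python (a generic sketch of the formulas, not the evaluator's actual implementation):

```python
import math

def ndcg_at_k(relevances, k=10):
    """relevances: 1/0 relevance of each ranked result, best rank first."""
    dcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

def mrr_at_k(relevances, k=10):
    for rank, rel in enumerate(relevances[:k]):
        if rel:
            return 1.0 / (rank + 1)
    return 0.0

# One query whose single relevant passage was ranked 2nd:
ranking = [0, 1, 0, 0]
print(ndcg_at_k(ranking), mrr_at_k(ranking))
```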
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### amharic-passage-retrieval-dataset
* Dataset: [amharic-passage-retrieval-dataset](https://huggingface.co/datasets/Desalegnn/amharic-passage-retrieval-dataset) at [e7be243](https://huggingface.co/datasets/Desalegnn/amharic-passage-retrieval-dataset/tree/e7be2430fc785999074dee8dbac1c3e466449442)
* Size: 40,237 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 23.09 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 76 tokens</li><li>mean: 507.11 tokens</li><li>max: 1024 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>ሚንስትር ዴኤታ ወይዘሮ አለም-ፀሀይ የአርባ ምንጭ ሆስፒታልና የኮቪድ-19 ሕክምና ማዕከልን ጎበኙ</code> | <code>አዲስ አበባ፣ መስከረም 13፣ 2013 (ኤፍ.ቢ.ሲ) የጤና ሚኒስቴር ሚንስትር ዴኤታ ወይዘሮ አለምፀሀይ ጳውሎስ በደቡብ ክልል ጋሞ ዞን የአርባ ምንጭ ከተማ ሆስፒታል እና ጤና ጣቢያ ጎብኙ፡፡እንዲሁም በኮቪድ-19 የህክምና ማዕከል ተገኝተው ያለውን የስራ እንቅስቃሴ መመልከታቸውም ተገልጸል፡፡ሚኒስትር ዴኤታዋ በጉብኝቱ ወቅት የህክምና ተቋማቱ ለአካባቢ ነዋሪዎች እየሰጡ ያለውን ዘርፈ ብዙ አገልግሎት እና ለኮቪድ 19 ወረርሽኝ የመከላከልና የመቆጣጠር ምላሽ አሠጣጥ የሚበረታታና ውጤታማ እንደሆነ ተናግረዋል፡፡በዚህም ለማዕከሉ ሰራተኞች ምስጋናቸውን አቅርበዋል፡፡የተቋማቱ ስራ ኃላፊዎችም ከሚኒስትር ዴኤታዋ ጋር መወያየታቸው ተሰምቷል፡፡ኃላፊዎቹ አገልግሎታቸውን በተሟላ መንገድ ለመስራት አያስችሉንም ያሏቸውን ጉድለቶች አንስተው ውይይት አድረገውባቸዋል፡፡የህክምና ተቋማቱ ያሉበት የስራ አፈጻጸም የሚበረታታ ቢሆንም ለተሻለ ስራ መነሳትና የጤና አገልግሎቱን ይበልጥ ማሻሻል ያስፈልጋል ሲሉ ሚኒስትር ዴኤታዋ ማሳሰባቸውን ከሚኒስቴሩ ያገኘነው መረጃ ያመለክታል፡፡</code> |
| <code>መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠየቁ</code> | <code>መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠይቀዋል፡፡የሰላም ሚኒስቴር ከሳይንስና ከፍተኛ ትምህርት ሚኒስቴርና የኢትዮጵያ መምህራን ማህበር ጋር በመተባበር ያዘጋጁት ሀገር አቀፍ መምህራን የሰላም ውይይት መድረክ በአዲስ አበባ እየተካሄደ ነው፡፡በዚህ የውይይት መድረክ ላይ የሰላም ሚኒስትሯ ወይዘሮ ሙፈሪያት ካሚልን ጨምሮ ሌሎች ባለድርሻ አካላት ተገኝተዋል፡፡ውይይቱ “ሰላምና ሀገር ወዳድ መምህራኖች ፤ ሰላምና ሀገር ወዳድ ተማሪዎችን ያፈራሉ” በሚል መሪ ቃል እየተካሄደ የሚገኝ ሲሆን መምህራን በትምህርት ቤቶችና በአከባቢያቸው ሰላም እንዲረጋገጥ የበኩላቸውን ሚና እንዲወጡ ተጠይቀዋል፡፡በውይይቱ ንግግር ያደረጉት የሰላም ሚኒስትር ወይዘሮ ሙፈሪያት ካሚል መምህራን ትውልድን መቅረጽ ካላቸው እድል አንፃር ሰላምን በመስበክ በኩል ከፍተኛ አስተዋጽኦ ሊያበርክቱ ይገባል ብለዋል፡፡ሀገራዊ ግንባታ ትምህርትና የተሟላ ስብዕና የሚጠይቅ በመሆኑም ለማህበረሰብ ስብዕናና የበለጸገ ትውልድን በመፍጠር ረገድ የመምህራን ሚና ክፍተኛ መሆኑንም ተናግረዋል።ትምህርት ቤቶች የሰላም ማዕድ ይሆኑ ዘንድም መምህራን እያከናዎኑት ያለውን ትውልድን የመቅረጽ ተግባር አጠናክረው መቀጠል እንዳለባቸውም ወይዘሮ ሙፈሪያት አሳስበዋል፡፡ በውይይቱ ላይ አስተያየት የሰጡት መምህራን በበኩላቸው ሰላም ሁሉንም የሚመለከት ጉዳይ በመሆኑ ሰላምን በመስበክና በማረጋገጥ ረገድ ከመንግስት ጋር በመሆን የሚጠበቅባቸውን ኃላፊነት እንደሚወጡ ገልጸዋል፡፡በተለይም የስነ ዜጋ፣ ስነ ምግባርና የታሪክ ትምህርት መምህራን ለተማሪዎች በሚያቀርቡት ትምህርት ላይ ሚዛናዊና ኃላፊነት በተሞላበት መንገድ ማቅረብ እንዳለባቸውም ጠቁመዋል፡፡ መምህሩ በስነ ምግባር አርዓያ በመሆን ሰላምና ግብ...</code> |
| <code>የኢትዮጵያ እና ማሊ ከ17 አመት በታች ብሄራዊ ቡድኖች ጨዋታ እሁድ ይካሄዳል</code> | <code>በአዲስ አበባ ስታድየም እየተዘጋጀ የሚገኘው ብሄራዊ ቡድኑ በዛሬው የልምምድ መርሃ ግብር በእሁዱ ጨዋታ ላይ ቋሚ ተሰላፊዎች ይሆናሉ ተብለው የሚገመቱትን በመለየት የቅንጅትና ከርቀት አክርሮ የመምታት ልምምዶችን አከናውኗል፡፡ባለፉት ሶስት ቀናት በመጠነኛ ጉዳት በልምምድ ወቅት አቋርጠው ሲወጡ የነበሩት ሳሙኤል ተስፋዬ እና አቡበከር ነስሩ በዛሬው ልምምድ ከቡድኑ ጋር ሙሉ ልምምድ የሰሩ ሲሆን ሁሉም ተጨዋቾች በሙሉ ጤንነት ላይ ይገኛሉ፡፡ከ17 አመት ቡድናችን እሁድ ዕለት ከአፍሮ ፅዮን ጋር ባደረጉት የአቋም መፈተሻ ጨዋታ ላይ ከአፍሮፅዮን በኩል መልካም እንቅስቃሴ ያሳዩ 6 ተጨዋቾች ጥሪ ቀርቦላቸው በዛሬው ልምምድ ላይ ተገኝተው ከቡድኑ ጋር ልምምድ ያደረጉ ቢሆንም አሳማኝ እንቅስቃሴ ባለማሳየታቸው እንዲመለሱ ተደርጓል፡፡ቀይ ቀበሮዎቹ በእሁዱ ጨዋታ በባማኮ የደረሰባቸውን የ2-0 ሽንፈት ቀልብሰው ወደ ማዳጋስካር የአፍሪካ ከ17 አመት በታች ዋንጫ ለማምራት በከፍተኛ ተነሳሽነት እና ፍላጎት ዝግጅታቸውን በማከናወን ላይ እንደሚገኙ ለመታዘብ ችለናል፡፡በኢትዮጵያ እና ማሊ መካከል የሚደረገው ጨዋታ እሁድ መስከረም 22 ቀን 2009 በአዲስ አበባ ስታድየም 10:00 ላይ የሚካሄድ ሲሆን ጨዋታው የሚካሄድበት የአዲስ አበባ ስታድየም ሜዳን ምቹ ለማድረግ የሚያስችሉ ስራዎች እየተከናወኑ ይገኛሉ፡፡የእሁዱ ተጋጣሚያችን የማሊ ከ17 አመት በታች ብሄራዊ ቡድን አርብ አዲስ አበባ ይገባል፡፡ ጨዋታውን የሚመሩት አራቱም ዳኞች ከኒጀር ፤ ኮሚሽነሩ ደግሞ ከዩጋንዳ እንደተመደቡም ታውቋል፡፡</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
1024,
256
],
"matryoshka_weights": [
1,
1
],
"n_dims_per_step": -1
}
```
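The two `matryoshka_dims` mean the model is trained so that the first 256 dimensions of each 1024-d embedding remain useful on their own. At inference time, truncation amounts to slicing and re-normalizing, which can be sketched generically (independent of any particular model):

```python
import math

def truncate_and_normalize(embedding, dim):
    # Keep the leading `dim` components, then rescale to unit length
    head = embedding[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

full = [0.5, 0.5, 0.5, 0.5]              # stand-in for a 1024-d vector
small = truncate_and_normalize(full, 2)  # keep the first 2 dims
print(small)
```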
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
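These values interact: a per-device batch of 16 with 8 gradient-accumulation steps gives an effective batch size of 128, and `warmup_ratio: 0.1` over the 1,260 optimizer steps in the training logs gives 126 warmup steps. A rough sketch of the resulting warmup-plus-cosine learning-rate curve (illustrative, not the exact scheduler implementation):

```python
import math

base_lr, total_steps, warmup_ratio = 2e-05, 1260, 0.1
warmup_steps = int(total_steps * warmup_ratio)  # 126

def lr_at(step):
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # linear warmup
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1 + math.cos(math.pi * progress))  # cosine decay

effective_batch = 16 * 8  # per_device_train_batch_size * gradient_accumulation_steps
print(effective_batch, warmup_steps, lr_at(warmup_steps), lr_at(total_steps))
```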
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_1024_cosine_ndcg@10 | dim_256_cosine_ndcg@10 |
|:-------:|:--------:|:-------------:|:-----------------------:|:----------------------:|
| -1 | -1 | - | 0.7570 | 0.7425 |
| 1.0 | 315 | 0.0758 | 0.8321 | 0.8217 |
| 2.0 | 630 | 0.0258 | 0.8394 | 0.8319 |
| 3.0 | 945 | 0.0121 | 0.8510 | 0.8441 |
| **4.0** | **1260** | **0.0081** | **0.8547** | **0.8484** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.56.2
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.1.1
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| kduxinw/gpt2 | kduxinw | 2025-09-22T12:12:29Z | 0 | 0 | null | pytorch, tf, jax, tflite, rust, onnx, safetensors, gpt2, exbert, en, license:mit, endpoints_compatible, region:us | null | 2025-09-22T11:59:08Z |
---
language: en
tags:
- exbert
license: mit
---
# GPT-2
You can test the model's full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
Concretely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of a word) to the right. The model internally uses a masking mechanism to ensure the
predictions for the token `i` only use the inputs from `1` to `i` and never the future tokens.
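The causal masking described above can be pictured as a lower-triangular matrix: position `i` may attend to positions `0..i` and to nothing later. A minimal illustrative sketch (not GPT-2's actual implementation):

```python
def causal_mask(seq_len):
    # mask[i][j] is True when position i may attend to position j
    return [[j <= i for j in range(seq_len)] for i in range(seq_len)]

for row in causal_mask(4):
    print(["x" if ok else "." for ok in row])
```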
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
This is the **smallest** version of GPT-2, with 124M parameters.
**Related Models:** [GPT-Large](https://huggingface.co/gpt2-large), [GPT-Medium](https://huggingface.co/gpt2-medium) and [GPT-XL](https://huggingface.co/gpt2-xl)
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
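Byte-level BPE starts from individual bytes and repeatedly merges the most frequent adjacent pair of tokens into a new vocabulary entry. One merge step might look like this (a toy illustration of the general algorithm, not the actual GPT-2 tokenizer):

```python
from collections import Counter

def most_frequent_pair(tokens):
    # Count all adjacent token pairs and return the most common one
    pairs = Counter(zip(tokens, tokens[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(tokens, pair):
    # Replace every occurrence of `pair` with a single merged token
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("low lower lowest")
pair = most_frequent_pair(tokens)  # ('l', 'o') occurs three times
print(pair, merge_pair(tokens, pair))
```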
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
|          | 35.13   | 45.99   | 87.65  | 83.4   | 29.41     | 65.85  | 1.16    | 1.17   | 37.50       | 75.20 |
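As a reminder of what these units mean: perplexity (PPL) is the exponential of the average per-token cross-entropy in nats, while bits per character/byte (BPC/BPB) expresses the same cross-entropy in bits, spread over the characters each token covers. The conversions are straightforward (generic formulas, not tied to these exact runs):

```python
import math

def perplexity(mean_nll_nats):
    # PPL = exp(average negative log-likelihood per token, in nats)
    return math.exp(mean_nll_nats)

def bits_per_char(mean_nll_nats, chars_per_token=1.0):
    # nats -> bits, spread over the characters each token covers
    return mean_nll_nats / math.log(2) / chars_per_token

print(perplexity(3.0), bits_per_char(math.log(2)))
```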
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
| Team-Atom/act_pick_and_place | Team-Atom | 2025-09-22T12:09:17Z | 0 | 0 | lerobot | lerobot, safetensors, act, robotics, dataset:Team-Atom/pick_and_place, arxiv:2304.13705, license:apache-2.0, region:us | robotics | 2025-09-22T12:09:04Z |
---
datasets: Team-Atom/pick_and_place
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- act
- lerobot
- robotics
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
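The control loop implied by action chunking is simple: query the policy once, execute the returned chunk of `k` actions open-loop, then re-observe and repeat. A minimal sketch of that loop (the policy and environment here are stand-in stubs, not the ACT network or a real robot interface):

```python
def run_chunked_control(policy, env, chunk_size=8, max_steps=32):
    # One policy call yields several actions, reducing inference frequency
    obs, steps = env.reset(), 0
    while steps < max_steps:
        chunk = policy(obs)[:chunk_size]
        for action in chunk:
            obs = env.step(action)
            steps += 1
            if steps >= max_steps:
                break
    return steps

# Stand-in stubs to show the control flow:
class DummyEnv:
    def reset(self): return 0
    def step(self, action): return action

dummy_policy = lambda obs: list(range(8))  # pretend chunk of 8 actions
print(run_chunked_control(dummy_policy, DummyEnv()))
```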
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
| Aeronicr/LPWAN | Aeronicr | 2025-09-22T12:08:47Z | 0 | 0 | transformers | transformers, safetensors, text-generation-inference, unsloth, llama, trl, en, base_model:unsloth/llama-3-8b-bnb-4bit, base_model:finetune:unsloth/llama-3-8b-bnb-4bit, license:apache-2.0, endpoints_compatible, region:us | null | 2025-09-22T12:08:33Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Aeronicr
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| selsar/identities_minorities | selsar | 2025-09-22T12:03:05Z | 9 | 0 | transformers | transformers, safetensors, deberta-v2, text-classification, arxiv:1910.09700, autotrain_compatible, endpoints_compatible, region:us | text-classification | 2025-09-19T09:41:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| Nabbers1999/Llama-4-Scout-17B-16E-Instruct-abliterated-v2-bnb-4bit | Nabbers1999 | 2025-09-22T12:02:10Z | 0 | 0 | transformers | transformers, safetensors, llama4_text, text-generation, quantized, 4bit, bitsandbytes, generated_from_original, conversational, base_model:jiangchengchengNLP/Llama-4-Scout-17B-16E-Instruct-abliterated-v2, base_model:quantized:jiangchengchengNLP/Llama-4-Scout-17B-16E-Instruct-abliterated-v2, license:apache-2.0, autotrain_compatible, endpoints_compatible, 4-bit, region:us | text-generation | 2025-09-22T11:50:37Z |
---
license: apache-2.0
base_model: jiangchengchengNLP/Llama-4-Scout-17B-16E-Instruct-abliterated-v2
tags:
- quantized
- 4bit
- bitsandbytes
- generated_from_original
library_name: transformers
---
# Nabbers1999/Llama-4-Scout-17B-16E-Instruct-abliterated-v2-bnb-4bit
This is a 4-bit quantized version of [jiangchengchengNLP/Llama-4-Scout-17B-16E-Instruct-abliterated-v2](https://huggingface.co/jiangchengchengNLP/Llama-4-Scout-17B-16E-Instruct-abliterated-v2) using BitsAndBytes.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Nabbers1999/Llama-4-Scout-17B-16E-Instruct-abliterated-v2-bnb-4bit"
)
model = AutoModelForCausalLM.from_pretrained(
    "Nabbers1999/Llama-4-Scout-17B-16E-Instruct-abliterated-v2-bnb-4bit",
    device_map="auto",
    trust_remote_code=True
)
```
| DeathGodlike/Lorablated-w2bb-psy-della_EXL3 | DeathGodlike | 2025-09-22T11:59:28Z | 0 | 0 | safetensors | safetensors, exl3, 4-bit, 6-bit, 8-bit, text-generation, base_model:Retreatcost/Lorablated-w2bb-psy-della, base_model:quantized:Retreatcost/Lorablated-w2bb-psy-della, license:apache-2.0, region:us | text-generation | 2025-09-22T11:59:26Z |
---
license: apache-2.0
base_model:
- Retreatcost/Lorablated-w2bb-psy-della
base_model_relation: quantized
pipeline_tag: text-generation
library_name: safetensors
tags:
- exl3
- 4-bit
- 6-bit
- 8-bit
---
## EXL3 quants: [ [H8-4.0BPW](https://huggingface.co/DeathGodlike/Lorablated-w2bb-psy-della_EXL3/tree/H8-4.0BPW) | [H8-6.0BPW](https://huggingface.co/DeathGodlike/Lorablated-w2bb-psy-della_EXL3/tree/H8-6.0BPW) | [H8-8.0BPW](https://huggingface.co/DeathGodlike/Lorablated-w2bb-psy-della_EXL3/tree/H8-8.0BPW) ]
# Original model: [Lorablated-w2bb-psy-della](https://huggingface.co/Retreatcost/Lorablated-w2bb-psy-della) by [Retreatcost](https://huggingface.co/Retreatcost)
| tomal66/gemma3-1b-sentiment-fpt-sft | tomal66 | 2025-09-22T11:58:54Z | 0 | 0 | transformers | transformers, safetensors, arxiv:1910.09700, endpoints_compatible, region:us | null | 2025-09-22T11:58:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tarundachepally/EGL_granite_8b_linear_full
|
tarundachepally
| 2025-09-22T11:54:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T11:44:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
liluckyl/Qwen3-0.6B-Gensyn-Swarm-dense_fast_bobcat
|
liluckyl
| 2025-09-22T11:53:18Z | 45 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am dense_fast_bobcat",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T11:07:26Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am dense_fast_bobcat
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GreatBird/ViTP
|
GreatBird
| 2025-09-22T11:50:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-22T07:29:10Z |
# Introduction
Modern computer vision is converging on a closed loop in which perception, reasoning and generation mutually reinforce one another. However, this loop remains incomplete: the top-down influence of high-level reasoning on the foundational learning of low-level perceptual features remains underexplored. This paper addresses this gap by proposing a new paradigm for pretraining foundation models in downstream domains. We introduce **V**isual **i**ns**T**ruction **P**retraining (**ViTP**), a novel approach that directly leverages reasoning to enhance perception. ViTP embeds a Vision Transformer (ViT) backbone within a Vision-Language Model and pretrains it end-to-end using a rich corpus of visual instruction data curated from target downstream domains. ViTP is powered by our proposed Visual Robustness Learning (VRL), which compels the ViT to learn robust and domain-relevant features from a sparse set of visual tokens. Extensive experiments on 16 challenging remote sensing and medical imaging benchmarks demonstrate that ViTP establishes new state-of-the-art performance across a diverse range of downstream tasks. The code is available at [GitHub](https://github.com/zcablii/ViTP).
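
VRL's full recipe is described in the paper; as a rough, illustrative sketch only (the random sparsification strategy below is an assumption, not the paper's exact procedure), training on a sparse subset of visual tokens can look like:

```python
import numpy as np

def sparse_token_subset(tokens: np.ndarray, keep_ratio: float, seed: int = 0):
    """Keep a random subset of visual tokens (MAE-style sparsification).

    tokens: (num_tokens, dim) array of patch embeddings.
    Returns the kept tokens and their original indices.
    """
    rng = np.random.default_rng(seed)
    num_keep = max(1, int(round(tokens.shape[0] * keep_ratio)))
    idx = rng.choice(tokens.shape[0], size=num_keep, replace=False)
    idx.sort()  # preserve the spatial order of the surviving tokens
    return tokens[idx], idx

# Example: a 14x14 patch grid (196 tokens) reduced to 25% of its tokens.
tokens = np.random.randn(196, 768)
kept, idx = sparse_token_subset(tokens, keep_ratio=0.25)
print(kept.shape)  # (49, 768)
```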
----

The synergistic relationship between perception, generation, and reasoning in modern CV. Our proposed ViTP forges a novel link from high-level reasoning to low-level perception, a previously underexplored connection. ViTP sets new SOTA performance across a diverse range of downstream tasks in medical imaging and remote sensing.
----

A conceptual illustration of the ViTP framework. A ViT backbone is embedded within a large VLM and then pretrained with domain-specific instruction following objective and Visual Robustness Learning (VRL). This process instils high-level semantic understanding into the ViT. The resulting weights are then used to initialize models for various downstream perception tasks.
----
|
TakalinProgress/ppo-LunarLander-v2
|
TakalinProgress
| 2025-09-22T11:39:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-22T11:38:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.09 +/- 21.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption based on the usual SB3 upload convention; check this repository's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; adjust to the actual file in this repository.
checkpoint = load_from_hub("TakalinProgress/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
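The `mean_reward` figure above (243.09 +/- 21.00) is a mean and standard deviation over evaluation episodes, analogous to SB3's `evaluate_policy`; a minimal sketch with made-up episode returns:

```python
import statistics

def summarize_returns(episode_returns):
    """Mean and population std of per-episode returns (np.std's default ddof=0)."""
    mean = statistics.mean(episode_returns)
    std = statistics.pstdev(episode_returns)
    return mean, std

# Hypothetical returns from 5 evaluation episodes:
returns = [250.0, 230.0, 260.0, 240.0, 235.0]
mean, std = summarize_returns(returns)
print(f"{mean:.2f} +/- {std:.2f}")  # 243.00 +/- 10.77
```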
|
AXERA-TECH/YOLOv8-Seg
|
AXERA-TECH
| 2025-09-22T11:33:04Z | 5 | 0 | null |
[
"onnx",
"Ultralytics",
"YOLOv8",
"YOLOv8-Seg",
"object-detection",
"en",
"base_model:Ultralytics/YOLOv8",
"base_model:quantized:Ultralytics/YOLOv8",
"license:mit",
"region:us"
] |
object-detection
| 2025-01-11T16:23:41Z |
---
license: mit
language:
- en
base_model:
- Ultralytics/YOLOv8
pipeline_tag: object-detection
tags:
- Ultralytics
- YOLOv8
- YOLOv8-Seg
---
# YOLOv8-Seg
This version of YOLOv8-Seg has been converted to run on the Axera NPU using **w8a16** quantization.
Compatible with Pulsar2 version: 3.4
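As a conceptual illustration of what **w8a16** means — int8 weights with fp16 activations — the numpy sketch below quantizes a weight matrix symmetrically per output channel and dequantizes on the fly; the NPU's actual kernels and calibration scheme will differ:

```python
import numpy as np

def quantize_w8(weights):
    """Symmetric per-output-channel int8 quantization of a weight matrix."""
    scale = np.abs(weights).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def linear_w8a16(x_fp16, q, scale):
    """Matmul with weights dequantized from int8; activations kept in fp16."""
    w = q.astype(np.float16) * scale.astype(np.float16)
    return x_fp16 @ w.T

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 128)).astype(np.float32)
q, scale = quantize_w8(w)
x = rng.normal(size=(1, 128)).astype(np.float16)
y = linear_w8a16(x, q, scale)
print(y.shape)  # (1, 64)
```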
## Convert tools links:
For those interested in model conversion, you can export an axmodel through:
- [The AXera Platform samples repo](https://github.com/AXERA-TECH/ax-samples), where you can find the detailed guide
- [Pulsar2 Link, How to Convert ONNX to axmodel](https://pulsar2-docs.readthedocs.io/en/latest/pulsar2/introduction.html)
## Support Platform
- AX650
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
- AX630C
- [爱芯派2](https://axera-pi-2-docs-cn.readthedocs.io/zh-cn/latest/index.html)
- [Module-LLM](https://docs.m5stack.com/zh_CN/module/Module-LLM)
- [LLM630 Compute Kit](https://docs.m5stack.com/zh_CN/core/LLM630%20Compute%20Kit)
|Chips|yolov8s-seg|
|--|--|
|AX650| 4.6 ms |
|AX630C| TBD ms |
## How to use
Download all files from this repository to the device
```
root@ax650:~/YOLOv8-Seg# tree
.
|-- ax650
| `-- yolov8s-seg.axmodel
|-- ax_yolov8_seg
|-- football.jpg
`-- yolov8_seg_out.jpg
```
### Inference
Input image:

#### Inference with AX650 Host, such as M4N-Dock(爱芯派Pro)
```
root@ax650:~/samples/AXERA-TECH/YOLOv8-Seg# ./ax_yolov8_seg -m ax650/yolov8s_seg.axmodel -i football.jpg
--------------------------------------
model file : ax650/yolov8s_seg.axmodel
image file : football.jpg
img_h, img_w : 640 640
--------------------------------------
Engine creating handle is done.
Engine creating context is done.
Engine get io info is done.
Engine alloc io is done.
Engine push input is done.
--------------------------------------
input size: 1
name: images [UINT8] [BGR]
1 x 640 x 640 x 3
output size: 7
name: /model.22/Concat_1_output_0 [FLOAT32]
1 x 80 x 80 x 144
name: /model.22/Concat_2_output_0 [FLOAT32]
1 x 40 x 40 x 144
name: /model.22/Concat_3_output_0 [FLOAT32]
1 x 20 x 20 x 144
name: /model.22/cv4.0/cv4.0.2/Conv_output_0 [FLOAT32]
1 x 80 x 80 x 32
name: /model.22/cv4.1/cv4.1.2/Conv_output_0 [FLOAT32]
1 x 40 x 40 x 32
name: /model.22/cv4.2/cv4.2.2/Conv_output_0 [FLOAT32]
1 x 20 x 20 x 32
name: output1 [FLOAT32]
1 x 32 x 160 x 160
post process cost time:16.21 ms
--------------------------------------
Repeat 1 times, avg time 4.69 ms, max_time 4.69 ms, min_time 4.69 ms
--------------------------------------
detection num: 8
0: 92%, [1354, 340, 1629, 1035], person
0: 91%, [ 5, 359, 314, 1108], person
0: 91%, [ 759, 220, 1121, 1153], person
0: 88%, [ 490, 476, 661, 999], person
32: 73%, [1233, 877, 1286, 923], sports ball
32: 63%, [ 772, 888, 828, 937], sports ball
32: 63%, [ 450, 882, 475, 902], sports ball
0: 55%, [1838, 690, 1907, 811], person
--------------------------------------
```
Output image:

#### Inference with M.2 Accelerator card
```
(base) axera@raspberrypi:~/lhj/YOLOv8-Seg $ ./axcl_aarch64/axcl_yolov8_seg -m ax650/yolov8s_seg.axmodel -i football.jpg
--------------------------------------
model file : ax650/yolov8s_seg.axmodel
image file : football.jpg
img_h, img_w : 640 640
--------------------------------------
axclrtEngineCreateContextt is done.
axclrtEngineGetIOInfo is done.
grpid: 0
input size: 1
name: images
1 x 640 x 640 x 3
output size: 7
name: /model.22/Concat_1_output_0
1 x 80 x 80 x 144
name: /model.22/Concat_2_output_0
1 x 40 x 40 x 144
name: /model.22/Concat_3_output_0
1 x 20 x 20 x 144
name: /model.22/cv4.0/cv4.0.2/Conv_output_0
1 x 80 x 80 x 32
name: /model.22/cv4.1/cv4.1.2/Conv_output_0
1 x 40 x 40 x 32
name: /model.22/cv4.2/cv4.2.2/Conv_output_0
1 x 20 x 20 x 32
name: output1
1 x 32 x 160 x 160
==================================================
Engine push input is done.
--------------------------------------
post process cost time:3.67 ms
--------------------------------------
Repeat 1 times, avg time 4.85 ms, max_time 4.85 ms, min_time 4.85 ms
--------------------------------------
detection num: 8
0: 92%, [1354, 340, 1629, 1035], person
0: 91%, [ 5, 359, 314, 1108], person
0: 91%, [ 759, 220, 1121, 1153], person
0: 88%, [ 490, 476, 661, 999], person
32: 73%, [1233, 877, 1286, 923], sports ball
32: 63%, [ 772, 888, 828, 937], sports ball
32: 63%, [ 450, 882, 475, 902], sports ball
0: 55%, [1838, 690, 1907, 811], person
--------------------------------------
```
Output image:

|
felixZzz/fgktxwe7-step_00400
|
felixZzz
| 2025-09-22T11:04:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T11:02:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TAUR-dev/M-multitask_sftdata_cd3_lm3_ac4_lc4-sft
|
TAUR-dev
| 2025-09-22T11:03:58Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"region:us"
] | null | 2025-09-22T11:03:25Z |
# M-multitask_sftdata_cd3_lm3_ac4_lc4-sft
This model was created as part of the **multitask_sftdata_cd3_lm3_ac4_lc4** experiment using the SkillFactory experiment management system.
## Model Details
- **Training Method**: LLaMAFactory SFT (Supervised Fine-Tuning)
- **Stage Name**: sft
- **Experiment**: multitask_sftdata_cd3_lm3_ac4_lc4
## Training Configuration
```json
{"model_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct", "trust_remote_code": true, "stage": "sft", "do_train": true, "finetuning_type": "full", "deepspeed": "/home/ubuntu/skill-factory/thirdparty/LLaMA-Factory/examples/deepspeed/ds_z2_config.json", "dataset": "TAUR_dev__multitask_sftdata_cd3_lm3_ac4_lc4", "template": "qwen", "cutoff_len": 16384, "max_samples": 1000000, "overwrite_cache": true, "preprocessing_num_workers": 1, "dataloader_num_workers": 0, "disable_tqdm": false, "output_dir": "/data4/tmp/sedrick/skillfactory/temp/llamafactory/checkpoints", "logging_steps": 10, "save_steps": 100000, "plot_loss": true, "overwrite_output_dir": true, "per_device_train_batch_size": 1, "gradient_accumulation_steps": 1, "learning_rate": 1e-06, "num_train_epochs": 1, "lr_scheduler_type": "cosine", "warmup_ratio": 0.05, "weight_decay": 0.0001, "adam_beta1": 0.9, "adam_beta2": 0.95, "bf16": true, "ddp_timeout": 180000000, "gradient_checkpointing": true, "save_only_model": true, "enable_masked_ranges": false, "save_strategy": "steps", "save_total_limit": 5, "sf_tracker_dataset_id": "TAUR-dev/D-ExpTracker__multitask_sftdata_cd3_lm3_ac4_lc4__v1", "sf_eval_before_training": false, "sf_wandb_project": "multitask_sftdata_cd3_lm3_ac4_lc4_sft", "sf_eval_steps": null, "run_name": "multitask_sftdata_cd3_lm3_ac4_lc4_sft"}
```
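The configuration requests `"lr_scheduler_type": "cosine"` with `"warmup_ratio": 0.05`; a minimal sketch of that schedule shape (linear warmup then cosine decay to zero — not Hugging Face's exact implementation):

```python
import math

def lr_at(step, total_steps, peak_lr=1e-6, warmup_ratio=0.05):
    """Linear warmup to peak_lr, then cosine decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

total = 1000
print(lr_at(0, total))     # 0.0 (start of warmup)
print(lr_at(50, total))    # peak: 1e-06
print(lr_at(1000, total))  # ~0.0 (end of cosine decay)
```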
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__multitask_sftdata_cd3_lm3_ac4_lc4__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-multitask_sftdata_cd3_lm3_ac4_lc4-sft")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-multitask_sftdata_cd3_lm3_ac4_lc4-sft")
```
|
Accountable-SA/gemma-3-270m-it-base-Q4_K_M-GGUF
|
Accountable-SA
| 2025-09-22T11:03:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Accountable-SA/gemma-3-270m-it-base",
"base_model:quantized:Accountable-SA/gemma-3-270m-it-base",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T11:03:13Z |
---
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
base_model: Accountable-SA/gemma-3-270m-it-base
---
# massimogiuseppe/gemma-3-270m-it-base-Q4_K_M-GGUF
This model was converted to GGUF format from [`Accountable-SA/gemma-3-270m-it-base`](https://huggingface.co/Accountable-SA/gemma-3-270m-it-base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Accountable-SA/gemma-3-270m-it-base) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo massimogiuseppe/gemma-3-270m-it-base-Q4_K_M-GGUF --hf-file gemma-3-270m-it-base-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Accountable-SA/gemma-3-270m-it-base-Q4_K_M-GGUF --hf-file gemma-3-270m-it-base-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Accountable-SA/gemma-3-270m-it-base-Q4_K_M-GGUF --hf-file gemma-3-270m-it-base-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Accountable-SA/gemma-3-270m-it-base-Q4_K_M-GGUF --hf-file gemma-3-270m-it-base-q4_k_m.gguf -c 2048
```
|
DBD-research-group/Bird-MAE-Base
|
DBD-research-group
| 2025-09-22T10:58:34Z | 488 | 0 |
transformers
|
[
"transformers",
"safetensors",
"feature-extraction",
"audio-classification",
"audio",
"custom_code",
"dataset:DBD-research-group/BirdSet",
"arxiv:2504.12880",
"region:us"
] |
audio-classification
| 2025-06-26T14:44:21Z |
---
datasets:
- DBD-research-group/BirdSet
pipeline_tag: audio-classification
library_name: transformers
tags:
- audio-classification
- audio
---
# Disclaimer: These models may still contain errors; we are in the process of verifying them.
# Bird-MAE-Base: Can Masked Autoencoders Also Listen to Birds?
- **Paper**: [ArXiv](https://arxiv.org/abs/2504.12880)
- **Repo**: [GitHub](https://github.com/DBD-research-group/Bird-MAE)
## Abstract
Masked Autoencoders (MAEs) have shown competitive results in audio classification by learning rich semantic representations through an efficient self-supervised reconstruction task. However, general-purpose models fail to generalize well when applied directly to fine-grained audio domains. Specifically, bird-sound classification requires distinguishing subtle inter-species differences and managing high intra-species acoustic variability, thereby revealing the performance limitations of general-domain Audio-MAE models. This work demonstrates that bridging this domain gap requires more than domain-specific pretraining data; adapting the entire training pipeline is crucial. We systematically revisit and adapt the pretraining recipe, fine-tuning methods, and frozen feature utilization to bird sounds using BirdSet, a large-scale bioacoustic dataset comparable to AudioSet. Our resulting Bird-MAE achieves new state-of-the-art results in BirdSet's multi-label classification benchmark. Additionally, we introduce parameter-efficient prototypical probing, enhancing the utility of frozen MAE representations and closely approaching fine-tuning performance in low-resource settings. Bird-MAE's prototypical probes outperform linear probing by up to 37 percentage points in MAP and narrow the gap to fine-tuning to approximately 3.3 percentage points on average across BirdSet downstream tasks. Bird-MAE also demonstrates robust few-shot capabilities with prototypical probing in our newly established few-shot benchmark on BirdSet, highlighting the potential of tailored self-supervised learning pipelines for fine-grained audio domains.
### Evaluation Results
**Table 1**
Probing results on the multi-label classification benchmark BirdSet with full data (MAP%).
Comparison of linear probing vs. prototypical probing using frozen encoder representations. Models follow the evaluation protocol of BirdSet. **Best** results are highlighted.
| Model | Arch. | Probing | HSNval | POW | PER | NES | UHH | NBP | SSW | SNE |
|-------------|-----------|---------|--------|-------|-------|-------|-------|-------|-------|-------|
| BirdAVES | HUBERT | linear | 14.91 | 12.60 | 5.41 | 6.36 | 11.76 | 33.68 | 4.55 | 7.86 |
| BirdAVES | HUBERT | proto | 32.52 | 19.98 | 5.14 | 11.87 | 15.41 | 39.85 | 7.71 | 9.59 |
| SimCLR | CvT-13 | linear | 17.29 | 17.89 | 6.66 | 10.64 | 7.43 | 26.35 | 6.99 | 8.92 |
| SimCLR | CvT-13 | proto | 18.00 | 17.02 | 3.37 | 7.91 | 7.08 | 26.60 | 5.36 | 8.83 |
| Audio-MAE | ViT-B/16 | linear | 8.77 | 10.36 | 3.72 | 4.48 | 10.78 | 24.70 | 2.50 | 5.60 |
| Audio-MAE | ViT-B/16 | proto | 19.42 | 19.58 | 9.34 | 15.53 | 16.84 | 35.32 | 8.81 | 12.34 |
| Bird-MAE | ViT-B/16 | linear | 13.06 | 14.28 | 5.63 | 8.16 | 14.75 | 34.57 | 5.59 | 8.16 |
| Bird-MAE | ViT-B/16 | proto | 43.84 | 37.67 | 20.72 | 28.11 | 26.46 | 62.68 | 22.69 | 22.16 |
| Bird-MAE | ViT-B/16 | linear | 12.44 | 16.20 | 6.63 | 8.31 | 15.41 | 41.91 | 5.75 | 7.94 |
| Bird-MAE | ViT-B/16 | proto | **49.97** | **51.73** | **31.38** | **37.80** | **29.97** | **69.50** | **37.74** | **29.96** |
| Bird-MAE | ViT-L/16 | linear | 13.25 | 14.82 | 7.29 | 7.93 | 12.99 | 38.71 | 5.60 | 7.84 |
| Bird-MAE | ViT-L/16 | proto | 47.52 | 49.65 | 30.43 | 35.85 | 28.91 | 69.13 | 35.83 | 28.31 |
For more details refer to the paper provided.
## Example
This model can be easily loaded and used for inference with the `transformers` library.
> Note that this is the base model; you need to fine-tune a classification head on top of it.
> We provide both linear and prototypical probing heads.
```python
from transformers import AutoFeatureExtractor, AutoModel
import librosa
# Load the model and feature extractor
model = AutoModel.from_pretrained("DBD-research-group/Bird-MAE-Base",trust_remote_code=True)
feature_extractor = AutoFeatureExtractor.from_pretrained("DBD-research-group/Bird-MAE-Base", trust_remote_code=True)
model.eval()
# Load an example audio file
audio_path = librosa.ex('robin')
# The model is trained on audio sampled at 32,000 Hz
audio, sample_rate = librosa.load(audio_path, sr=32_000)
mel_spectrogram = feature_extractor(audio)
# The returned embedding's dimensionality depends on the model size
embedding = model(mel_spectrogram)
```
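On top of frozen embeddings like the one above, the paper's prototypical probing can be sketched as follows. This head is an illustrative reconstruction: the class count, prototypes-per-class, and embedding size are placeholders, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PrototypicalProbe(nn.Module):
    """Illustrative prototype head: learnable per-class prototypes scored
    against frozen patch embeddings, max-pooled into class logits."""
    def __init__(self, embed_dim, num_classes, protos_per_class=4):
        super().__init__()
        self.num_classes = num_classes
        self.protos_per_class = protos_per_class
        self.prototypes = nn.Parameter(
            torch.randn(num_classes * protos_per_class, embed_dim))

    def forward(self, patch_embeddings):  # (batch, num_patches, embed_dim)
        # Cosine similarity between every patch and every prototype
        x = nn.functional.normalize(patch_embeddings, dim=-1)
        p = nn.functional.normalize(self.prototypes, dim=-1)
        sim = x @ p.T                      # (batch, patches, classes * protos)
        sim = sim.amax(dim=1)              # best-matching patch per prototype
        sim = sim.view(-1, self.num_classes, self.protos_per_class)
        return sim.amax(dim=-1)            # best prototype per class -> logits

probe = PrototypicalProbe(embed_dim=768, num_classes=10)
logits = probe(torch.randn(2, 196, 768))
print(logits.shape)  # torch.Size([2, 10])
```

Only the probe's parameters are trained; the MAE encoder stays frozen, which is what makes this parameter-efficient.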
## Citation
```
@misc{rauch2025audiomae,
title={Can Masked Autoencoders Also Listen to Birds?},
author={Lukas Rauch and René Heinrich and Ilyass Moummad and Alexis Joly and Bernhard Sick and Christoph Scholz},
year={2025},
eprint={2504.12880},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
url={https://arxiv.org/abs/2504.12880},
}
```
|
thegdpranavl/Qwen3_8B_Bespoke
|
thegdpranavl
| 2025-09-22T10:54:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B-Base-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-Base-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T10:54:02Z |
---
base_model: unsloth/Qwen3-8B-Base-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thegdpranavl
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-Base-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
felixZzz/2xtvgc3k-step_00500
|
felixZzz
| 2025-09-22T10:53:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T10:51:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
felixZzz/2xtvgc3k-step_00400
|
felixZzz
| 2025-09-22T10:51:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T10:49:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Poorvaja/Byt5_Telugu
|
Poorvaja
| 2025-09-22T10:49:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T10:48:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
samhitmantrala/smish_final
|
samhitmantrala
| 2025-09-22T10:37:07Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T10:31:48Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smish_final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smish_final
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0618
- Accuracy: 0.9847
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 442 | 0.0608 | 0.9858 |
| 0.0749 | 2.0 | 884 | 0.0618 | 0.9847 |
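The step counts above tie together with the hyperparameters: 442 optimizer steps per epoch at batch size 16 implies roughly 7,072 training examples (assuming no gradient accumulation), and the linear scheduler decays the learning rate from 2e-5 to zero over the full run. A small sketch:

```python
steps_per_epoch = 442
batch_size = 16
num_epochs = 2
total_steps = steps_per_epoch * num_epochs       # 884, matching the table
approx_examples = steps_per_epoch * batch_size   # ~7,072 examples (inferred)

def linear_lr(step, base_lr=2e-5, total=total_steps):
    """Linear decay from base_lr to 0 with no warmup, as configured above."""
    return base_lr * max(0.0, 1.0 - step / total)

print(total_steps, approx_examples)  # 884 7072
print(linear_lr(442))                # 1e-05, halfway through training
```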
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
duongve/NetaYume-Lumina-Image-2.0
|
duongve
| 2025-09-22T10:36:13Z | 2,902 | 13 |
diffusion-single-file
|
[
"diffusion-single-file",
"stable-diffusion",
"text-to-image",
"comfyui",
"base_model:Alpha-VLLM/Lumina-Image-2.0",
"base_model:finetune:Alpha-VLLM/Lumina-Image-2.0",
"license:apache-2.0",
"region:us"
] |
text-to-image
| 2025-08-06T09:08:01Z |
---
pipeline_tag: text-to-image
license: apache-2.0
base_model:
- neta-art/Neta-Lumina
- Alpha-VLLM/Lumina-Image-2.0
tags:
- stable-diffusion
- text-to-image
- comfyui
- diffusion-single-file
---
# NetaYume Lumina Image v2.0

---
**I. Introduction**
NetaYume Lumina is a text-to-image model fine-tuned from [Neta Lumina](https://huggingface.co/neta-art/Neta-Lumina), a high-quality anime-style image generation model developed by [Neta.art Lab](https://huggingface.co/neta-art). It builds upon [Lumina-Image-2.0](https://huggingface.co/Alpha-VLLM/Lumina-Image-2.0), an open-source base model released by the [Alpha-VLLM](https://huggingface.co/Alpha-VLLM) team at Shanghai AI Laboratory.
This model was trained with the goal of not only generating realistic human images but also producing high-quality anime-style images. Despite being fine-tuned on a specific dataset, it retains a significant amount of knowledge from the base model.
**Key Features:**
- **High-Quality Anime Generation**: Generates detailed anime-style images with sharp outlines, vibrant colors, and smooth shading.
- **Improved Character Understanding**: Better captures characters, especially those from the Danbooru dataset, resulting in more coherent and accurate character representations.
- **Enhanced Fine Details**: Accurately generates accessories, clothing textures, hairstyles, and background elements with greater clarity.
The file `NetaYume_Lumina_v2_all_in_one.safetensors` bundles the VAE, text encoder, and image-backbone weights needed to use the model with ComfyUI.
---
**II. Model Components & Training Details**
- **Text Encoder**: Pre-trained **Gemma-2-2b**
- **Variational Autoencoder**: Pre-trained **Flux.1 dev's VAE**
- **Image Backbone**: Fine-tuned from **NetaLumina's** image backbone
---
**III. Suggestion**
**System Prompt:** A system prompt helps the model understand and align with your prompts, making it easier to generate the images you want.
For anime-style images using Danbooru tags, either of the following works:
- You are an assistant designed to generate anime images based on textual prompts.
- You are an assistant designed to generate high-quality images based on user prompts and danbooru tags.
**Recommended Settings**
- CFG: 4–7
- Sampling Steps: 40-50
- Sampler:
- Euler a (with scheduler: normal)
- res_multistep (with scheduler: linear_quadratic)
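The CFG value above controls classifier-free guidance strength: at each denoising step the conditional and unconditional predictions are blended. A toy sketch of the standard formula (illustrative lists, not code from this repository):

```python
def apply_cfg(uncond_pred, cond_pred, cfg_scale):
    """Standard classifier-free guidance: move the prediction away from the
    unconditional output by cfg_scale times the conditional difference."""
    return [u + cfg_scale * (c - u) for u, c in zip(uncond_pred, cond_pred)]

uncond = [0.0, 0.0, 0.5]
cond = [1.0, 0.5, 0.5]
print(apply_cfg(uncond, cond, 1.0))  # scale 1 reproduces cond: [1.0, 0.5, 0.5]
print(apply_cfg(uncond, cond, 5.0))  # differences amplified 5x: [5.0, 2.5, 0.5]
```

Higher scales (toward 7 in the range above) follow the prompt more strongly at the cost of flexibility.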
---
**IV. Acknowledgments**
- [narugo1992](https://huggingface.co/narugo) – for the invaluable Danbooru dataset
- [Alpha-VLLM](https://huggingface.co/Alpha-VLLM) – for creating a wonderful model!
- [Neta.art](https://huggingface.co/neta-art/Neta-Lumina) and their team – for openly sharing an awesome model.
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758536780
|
poolkiltzn
| 2025-09-22T10:27:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T10:27:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dctellya/distilbert-base-uncased-finetuned-imdb-accelerate
|
dctellya
| 2025-09-22T10:22:43Z | 0 | 0 | null |
[
"safetensors",
"distilbert",
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T10:14:17Z |
---
license: apache-2.0
---
|
chenyuming/medgemma-4b-it-sft-lora-922
|
chenyuming
| 2025-09-22T10:20:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T06:34:03Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-922
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for medgemma-4b-it-sft-lora-922
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="chenyuming/medgemma-4b-it-sft-lora-922", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenyuming052-the-university-of-adelaide/medgemma-sft-dpc-922/runs/pc67ruqb)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Ziad177/whisper-large-v3-qlora_
|
Ziad177
| 2025-09-22T10:14:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-22T10:14:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3
|
MattBou00
| 2025-09-22T10:13:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T10:12:36Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_09-55-45/final-model")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_09-55-45/final-model")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_09-55-45/final-model")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
aleksa-codes/flux-ghibsky-illustration
|
aleksa-codes
| 2025-09-22T10:11:58Z | 6,635 | 295 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"image-generation",
"flux",
"replicate",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-08-20T13:59:25Z |
---
tags:
- text-to-image
- diffusers
- lora
- template:sd-lora
- image-generation
- flux
- replicate
pipeline_tag: text-to-image
thumbnail: >-
https://tjzk.replicate.delivery/models_models_cover_image/e5bc70de-c6ae-497f-bf2c-7e81b1183f05/out-0.jpg
widget:
- text: >-
GHIBSKY style, a cat on a windowsill gazing out at a starry night sky and
distant city lights
output:
url: images/example1.jpg
- text: >-
GHIBSKY style, a fisherman casting a line into a peaceful village lake
surrounded by quaint cottages
output:
url: images/example2.jpg
- text: >-
GHIBSKY style, cozy mountain cabin covered in snow, with smoke curling from
the chimney and a warm, inviting light spilling through the windows
output:
url: images/example3.jpg
- text: GHIBSKY style, Mykonos
output:
url: images/example4.jpg
- text: >-
GHIBSKY style, an orange Lamborghini driving down a hill road at night with
a beautiful ocean view in the background, side view, no text
output:
url: images/example5.jpg
- text: >-
GHIBSKY style, a small Yorkie on a windowsill during a snowy winter night,
with a warm, cozy glow from inside and soft snowflakes drifting outside
output:
url: images/example6.jpg
- text: >-
GHIBSKY style, serene Japanese garden with a koi pond and a traditional tea
house, nestled under a canopy of cherry blossoms in full bloom
output:
url: images/example7.jpg
- text: GHIBSKY style, the most beautiful place in the universe
output:
url: images/example8.jpg
- text: GHIBSKY style painting, sign saying "Flux Ghibsky"
output:
url: images/example_dj4xgd39e.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: GHIBSKY style
license: other
license_name: flux-dev-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Flux Ghibsky Illustration: Create Serene and Enchanting Landscapes
<Gallery />
## Model Description
The Flux Ghibsky Illustration model generates landscapes that blend serene, surreal skies with intricate, Ghibli-inspired details. This fusion of styles creates enchanting scenes that capture the essence of both Ghibli's whimsical charm and Makoto Shinkai's atmospheric beauty. Perfect for creating dreamy visuals. You can also run the model on Replicate. Feedback is welcome!
[Replicate Model Page](https://replicate.com/aleksa-codes/flux-ghibsky-illustration)
## Trigger Words
Use `GHIBSKY style` to invoke the model’s unique aesthetic. It’s best to start your prompt with the trigger word, followed by descriptions of your scene, such as nature, skies, houses, roads, villages, etc.
If you are getting too realistic images, try adding `painting` to your prompt, for example: `GHIBSKY style painting`.
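Assembling the prompt is simple enough to script. The helper below is an illustrative sketch (the function name is ours, not part of the model release): it prefixes a scene description with the trigger word, optionally switching to the `painting` variant for outputs that come out too realistic.

```python
def build_ghibsky_prompt(scene: str, painting: bool = False) -> str:
    """Prefix a scene description with the GHIBSKY trigger word.

    painting=True uses the 'GHIBSKY style painting' variant suggested
    above when results look too realistic.
    """
    trigger = "GHIBSKY style painting" if painting else "GHIBSKY style"
    return f"{trigger}, {scene}"

print(build_ghibsky_prompt("a quiet harbor village under a dreamy sky"))
# GHIBSKY style, a quiet harbor village under a dreamy sky
```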
## Training Details
- **Trained Using**: [Flux LoRA Fast Training on fal.ai](https://fal.ai/models/fal-ai/flux-lora-fast-training) and [Flux LoRA Trainer on Replicate](https://replicate.com/ostris/flux-dev-lora-trainer/train)
- **Number of Images**: 35
- **Trigger Word**: `GHIBSKY`
- **Auto-captioning**: Enabled
- **Auto-captioning Prefix**: `""`
- **Auto-captioning Suffix**: `", GHIBSKY style"`
- **Training Steps**: 1000
- **Learning Rate**: 0.0004
- **Batch Size**: 1
- **LoRA Rank**: 16
## Download Model
[Download the *.safetensors LoRA](https://huggingface.co/aleksa-codes/flux-ghibsky-illustration/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```python
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('aleksa-codes/flux-ghibsky-illustration', weight_name='lora.safetensors')
image = pipeline('GHIBSKY style, a serene lakeside village with colorful houses and towering mountains under a dreamy sky').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
# Related Tools
* **[UnYellowGPT](https://unyellowgpt.com/):** Noticing a yellow or sepia tint in your AI-generated images? This one-click tool intelligently removes unwanted color casts, restoring the natural white balance and vibrancy to your visuals.
* **[GPT Image Captioner](https://gptcaptioner.aleksa.codes/):** If you're training your own LoRA model, this open-source tool I created is a great replacement for standard auto-captioning. It generates high-quality descriptive `.txt` files for your images, supporting both OpenAI and local inference with Ollama.
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
MattBou00/llama-3-2-1b-detox_RETRY_SAMPLING_scale10_Round3-checkpoint-epoch-80
|
MattBou00
| 2025-09-22T10:08:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"ppo",
"reinforcement-learning",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
reinforcement-learning
| 2025-09-22T10:07:54Z |
---
license: apache-2.0
library_name: transformers
tags:
- trl
- ppo
- transformers
- reinforcement-learning
---
# TRL Model
This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.
## Usage
To use this model for inference, first install the TRL library:
```bash
python -m pip install trl
```
You can then generate text as follows:
```python
from transformers import pipeline
generator = pipeline("text-generation", model="MattBou00//content/IRL-Bayesian/outputs/2025-09-22_09-55-45/checkpoints/checkpoint-epoch-80")
outputs = generator("Hello, my llama is cute")
```
If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
```python
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead
tokenizer = AutoTokenizer.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_09-55-45/checkpoints/checkpoint-epoch-80")
model = AutoModelForCausalLMWithValueHead.from_pretrained("MattBou00//content/IRL-Bayesian/outputs/2025-09-22_09-55-45/checkpoints/checkpoint-epoch-80")
inputs = tokenizer("Hello, my llama is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"])
```
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758535542
|
poolkiltzn
| 2025-09-22T10:07:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-22T10:06:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
martijn75/token_voc
|
martijn75
| 2025-09-22T10:06:57Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-01-10T10:40:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/illustrious-pixel-art-from-hades-v4-series-v-40-sdxl
|
John6666
| 2025-09-22T10:03:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"pixel art",
"2D",
"retro",
"indie",
"clean lines",
"sharp detail",
"consistent palettes",
"adherence",
"perspective",
"poses",
"consistency",
"game assets",
"visual fidelity",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-09-22T09:51:57Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- pixel art
- 2D
- retro
- indie
- clean lines
- sharp detail
- consistent palettes
- adherence
- perspective
- poses
- consistency
- game assets
- visual fidelity
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1732312/illustrious-pixelart-from-hades?modelVersionId=2239694).
This model created by [DeViLDoNia](https://civitai.com/user/DeViLDoNia).
|
oliverguhr/gemma-3-1b-german-spelling
|
oliverguhr
| 2025-09-22T09:53:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T09:48:22Z |
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** oliverguhr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kitsunea/modelSmolLM2-improved-assignment2
|
kitsunea
| 2025-09-22T09:51:17Z | 30 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T12:19:03Z |
---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-135M
tags:
- generated_from_trainer
model-index:
- name: modelSmolLM2-improved-assignment2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# modelSmolLM2-improved-assignment2
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3004
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9379 | 0.1067 | 200 | 2.8576 |
| 2.6727 | 0.2133 | 400 | 2.7349 |
| 2.5821 | 0.32 | 600 | 2.6500 |
| 2.4816 | 0.4267 | 800 | 2.5902 |
| 2.4469 | 0.5333 | 1000 | 2.5419 |
| 2.4501 | 0.64 | 1200 | 2.4986 |
| 2.3734 | 0.7467 | 1400 | 2.4500 |
| 2.3232 | 0.8533 | 1600 | 2.4165 |
| 2.3016 | 0.96 | 1800 | 2.3871 |
| 1.9932 | 1.0667 | 2000 | 2.3970 |
| 1.8168 | 1.1733 | 2200 | 2.3867 |
| 1.8014 | 1.28 | 2400 | 2.3682 |
| 1.785 | 1.3867 | 2600 | 2.3603 |
| 1.7721 | 1.4933 | 2800 | 2.3452 |
| 1.7796 | 1.6 | 3000 | 2.3264 |
| 1.7389 | 1.7067 | 3200 | 2.3171 |
| 1.7792 | 1.8133 | 3400 | 2.3076 |
| 1.7321 | 1.92 | 3600 | 2.3004 |
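Assuming the reported validation loss is the mean token-level cross-entropy (in nats), the final value in the table above corresponds to a perplexity of roughly 10:

```python
import math

# Final validation loss from the table above (mean token cross-entropy, in nats)
eval_loss = 2.3004

perplexity = math.exp(eval_loss)
print(f"perplexity ≈ {perplexity:.2f}")  # ≈ 9.98
```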
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
tamewild/4b_v121_merged_e5
|
tamewild
| 2025-09-22T09:49:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T09:48:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ArrayCats/LoRA-1.5
|
ArrayCats
| 2025-09-22T09:46:54Z | 0 | 8 | null |
[
"license:unknown",
"region:us"
] | null | 2023-07-23T00:06:13Z |
---
license: unknown
---
Some SDXL/PONY models are included here, but I plan to host them separately in the future, so the SDXL/PONY models under this repository will no longer be updated (they will not be actively deleted either).
|
ACECA/lowMvMax_7
|
ACECA
| 2025-09-22T09:45:44Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-16T13:55:49Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
mastefan/2025-24679-image-autolguon-predictor
|
mastefan
| 2025-09-22T09:42:53Z | 0 | 0 |
transformers
|
[
"transformers",
"generated_from_trainer",
"image_classification",
"office_supplies",
"binary_class",
"image-text-to-text",
"en",
"dataset:0408happyfeet/p3hw1-pen-detection",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-09-22T07:51:19Z |
---
Author: Michael Stefanov
library_name: transformers
license: apache-2.0
tags:
- generated_from_trainer
- image_classification
- office_supplies
- binary_class
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: 2025-24679-text-distilbert-predictor
results: []
datasets:
- 0408happyfeet/p3hw1-pen-detection
language:
- en
pipeline_tag: image-text-to-text
---
# 2025-24679-image-autolguon-predictor
## Model description
Purpose:
This model is intended for in-class assignments and activities associated with Course 24679 at CMU.
Preprocessing/Augmentation:
Preprocessing consisted of splitting the dataset into train and test sets, then using AutoML to predict whether each image in the dataset contains a pen.
The predictor was multimodal and was fit with a 20-minute time limit and the best_quality preset.
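The train/test split step can be sketched in plain Python (an illustration only — the actual run used AutoGluon's own tooling, and the file names below are made up):

```python
import random

def train_test_split(items, test_fraction=0.2, seed=42):
    """Shuffle and split a list of (image_path, label) records.

    A plain-Python sketch of the split step described above.
    """
    rng = random.Random(seed)
    shuffled = items[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

# Dummy pen / no-pen records standing in for the real dataset
records = [(f"img_{i}.png", i % 2) for i in range(100)]
train, test = train_test_split(records)
print(len(train), len(test))  # 80 20
```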
## Intended uses & limitations
Intended use/limits:
This model is intended exclusively for classroom and assignment use. Please
request permission if you wish to use it elsewhere. Note that accuracy
for this model is quite low, due to the short training time used for course practice.
Ethical notes:
The AI used in this sample is a fairly benign use case handling basic
classification. However, please weigh the environmental impact of
large-scale usage against the necessary good such technology brings.
AI Usage disclosure:
Original code to assist with data augmentation was developed with the help of
Google Gemini, in combination with course material from 24679 at CMU.
## Training and evaluation data
### limitations:
The training data is a simple image dataset for binary image detection, and the
model has been trained to classify the recommended label column. As a result,
this trained model is quite minimal.
|