modelId (string, 5–138 chars) | author (string, 2–42 chars) | last_modified (date, 2020-02-15 11:33:14 – 2025-04-20 06:26:59) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 429 classes) | tags (sequence, 1 – 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 – 2025-04-20 06:26:36) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
harikc456/ppo-LunarLander-v2 | harikc456 | "2023-03-23T06:09:34Z" | 0 | 0 | null | [
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-22T19:20:11Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -30.51 +/- 111.18
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'hg_ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 1000000,
 'learning_rate': 0.0004,
 'num_envs': 64,
 'num_steps': 1024,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.98,
 'gae_lambda': 0.98,
 'num_minibatches': 4,
 'update_epochs': 500,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'harikc456/ppo-LunarLander-v2',
 'batch_size': 65536,
 'minibatch_size': 16384}
```
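For readers cross-checking the configuration: in cleanRL-style PPO the rollout batch is derived from the settings above rather than set independently. A minimal sketch of that arithmetic (our own illustration, not part of the original card):
```python
num_envs = 64          # parallel environments
num_steps = 1024       # rollout length per environment
num_minibatches = 4

batch_size = num_envs * num_steps               # 65536, matches 'batch_size' above
minibatch_size = batch_size // num_minibatches  # 16384, matches 'minibatch_size' above
print(batch_size, minibatch_size)
```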
|
isspek/xlnet-base-cased_monkeypox_llama_4_2e-5_16 | isspek | "2025-03-23T14:56:33Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-26T14:46:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
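Absent official instructions, here is a minimal sketch using the standard 🤗 `pipeline` API; the model id comes from this repo, but the example input and label semantics are assumptions on our part:
```python
from transformers import pipeline

# Hypothetical usage sketch; the card does not document the label set.
classifier = pipeline(
    "text-classification",
    model="isspek/xlnet-base-cased_monkeypox_llama_4_2e-5_16",
)
print(classifier("A new monkeypox case was reported in the city today."))
```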
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mrferr3t/c6979fb5-784a-4a2b-9567-cfb294aa1b39 | mrferr3t | "2025-02-07T13:13:09Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"opt",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"region:us"
] | null | "2025-02-07T13:09:35Z" | ---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- generated_from_trainer
model-index:
- name: miner_id_24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: false
base_model: facebook/opt-125m
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 956ec78c9a13d665_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/956ec78c9a13d665_train_data.json
type:
field_input: text
field_instruction: question
field_output: attempt
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience:
eval_max_new_tokens: 128
eval_steps:
eval_strategy: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id:
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0004
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps:
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 20
micro_batch_size: 16
mlflow_experiment_name: /tmp/956ec78c9a13d665_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 100
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps:
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.0
wandb_entity: null
wandb_mode: disabled
wandb_name: c41a388c-1b8f-4331-9834-02ca536326e8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: c41a388c-1b8f-4331-9834-02ca536326e8
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# miner_id_24
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on an unspecified dataset (the trainer recorded no dataset name).
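Because this repo publishes a LoRA adapter (see `adapter: lora` in the config above), a minimal loading sketch with 🤗 PEFT — our assumption, not an official example:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
# Attach the adapter weights published in this repo to the base model.
model = PeftModel.from_pretrained(base, "mrferr3t/c6979fb5-784a-4a2b-9567-cfb294aa1b39")
```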
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 20
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
imnaresh/zu2502028 | imnaresh | "2025-02-06T17:14:45Z" | 12 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-06T16:03:39Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: zu2502028
---
# Zu2502028
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `zu2502028` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('imnaresh/zu2502028', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
autoprogrammer/Llama-3.2-1B-Instruct-medmcqa-zh-slerp | autoprogrammer | "2024-11-21T19:18:11Z" | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-21T19:15:35Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
N0de/ppo-SnowballTarget | N0de | "2024-03-27T07:10:34Z" | 17 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2024-03-27T07:10:26Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
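If you first need to pull this trained run from the Hub, here is a sketch using the Hugging Face course tooling (flag behavior may vary across ml-agents versions):
```bash
mlagents-load-from-hf --repo-id="N0de/ppo-SnowballTarget" --local-dir="./downloads/SnowballTarget"
```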
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: N0de/ppo-SnowballTarget
3. Select your *.nn / *.onnx file
4. Click on "Watch the agent play" 👀
|
yuntian-deng/gpt2-explicit-cot-multiplication-20-digits | yuntian-deng | "2024-07-19T01:08:03Z" | 148 | 1 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-19T00:47:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
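Absent official instructions, a minimal text-generation sketch; note that the exact digit/chain-of-thought input format this model expects is not documented here, so the prompt below is only a placeholder:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="yuntian-deng/gpt2-explicit-cot-multiplication-20-digits",
)
# Placeholder prompt; consult the authors' code for the expected multiplication format.
print(generator("1 2 3 * 4 5 6 =", max_new_tokens=256)[0]["generated_text"])
```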
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Fizzarolli/lust-7b | Fizzarolli | "2024-04-16T14:02:36Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"roleplay",
"conversational",
"trl",
"unsloth",
"en",
"dataset:Fizzarolli/rpguild_processed",
"dataset:Fizzarolli/bluemoon_processeed",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-15T01:42:29Z" | ---
license: apache-2.0
datasets:
- Fizzarolli/rpguild_processed
- Fizzarolli/bluemoon_processeed
language:
- en
library_name: transformers
tags:
- roleplay
- conversational
- trl
- unsloth
---
# lust-7b
experimental rp model.
## prompt format
this one's a bit funky.
```
<|description|>Character
Character is blah blah blah</s>
<|description|>Character 2
Character 2 is blah blah blah (optional to make more than one)</s>
<|narrator|>
Describe what you want to happen in the scenario (I don't even know if this works)
<|message|>Character
Character does blah blah blah</s>
<|message|>Character 2
Character 2 does blah blah blah</s>
<|message|>Character
[start model generation here!]
```
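A minimal generation sketch that wires the format above into 🤗 transformers (the character placeholders mirror the template; this is our glue code, not the author's):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Fizzarolli/lust-7b")
model = AutoModelForCausalLM.from_pretrained("Fizzarolli/lust-7b")

# Build a prompt in the documented format and let the model continue it.
prompt = (
    "<|description|>Character\n"
    "Character is blah blah blah</s>\n"
    "<|message|>Character\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:]))
```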
sillytavern templates: TODO
## quants
gguf: https://huggingface.co/mradermacher/lust-7b-GGUF (thanks @mradermacher!) |
albertus-sussex/veriscrape-fixed-simcse-auto-reference_5_to_verify_5-fold-6 | albertus-sussex | "2025-04-01T13:08:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-04-01T13:07:32Z" | (card unavailable: the fetch returned Hugging Face's HTTP 429 rate-limit error page) |
sail-rvc/linkara | sail-rvc | "2023-07-14T07:40:22Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:40:00Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# linkara
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:40:21
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
visdata/b14_3 | visdata | "2025-01-06T12:39:07Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-06T12:12:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
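As a stopgap, a minimal sketch based only on the repo tags (`llama`, `text-generation`); the prompt is a placeholder and this is not an official example:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="visdata/b14_3")
print(generator("Hello, world!", max_new_tokens=50)[0]["generated_text"])
```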
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RishikAngara/llama-finetuned | RishikAngara | "2025-03-12T18:36:17Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | null | "2025-03-12T18:36:13Z" | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
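A minimal PEFT loading sketch inferred from the `base_model` metadata (our assumption, not an official example; the Llama base model is gated, so authenticated Hub access is assumed):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
model = PeftModel.from_pretrained(base, "RishikAngara/llama-finetuned")
# Optionally fold the adapter into the base weights for standalone inference.
model = model.merge_and_unload()
```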
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
ShynBui/s5 | ShynBui | "2023-08-04T18:03:50Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-08-04T15:56:43Z" | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: s5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# s5
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad_v2 dataset.
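A minimal usage sketch for the resulting extractive-QA checkpoint (the example question and context are ours, not the authors'):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="ShynBui/s5")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
)
print(result)  # e.g. {'answer': 'Paris, France', 'score': ..., 'start': ..., 'end': ...}
```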
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.3
- Tokenizers 0.13.3
|
Azure99/blossom-v3-mistral-7b | Azure99 | "2024-02-20T02:38:49Z" | 1,677 | 2 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"zh",
"en",
"dataset:Azure99/blossom-chat-v1",
"dataset:Azure99/blossom-math-v2",
"dataset:Azure99/blossom-wizard-v1",
"dataset:Azure99/blossom-orca-v1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-20T05:05:00Z" | ---
license: apache-2.0
datasets:
- Azure99/blossom-chat-v1
- Azure99/blossom-math-v2
- Azure99/blossom-wizard-v1
- Azure99/blossom-orca-v1
language:
- zh
- en
---
# **BLOSSOM-v3-mistral-7b**
[💻Github](https://github.com/Azure99/BlossomLM) • [🚀Blossom Chat Demo](https://blossom-chat.com/)
### Introduction
Blossom is a conversational large language model, fine-tuned from the Mistral-7B-v0.1 pre-trained model on the mixed Blossom Orca/Wizard/Chat/Math dataset. Blossom has robust general capabilities and context comprehension. Additionally, the high-quality Chinese and English datasets used for training have been made open source.
Training was conducted in two stages. The first stage used 100K Wizard, 100K Orca single-turn instruction datasets, training for 1 epoch; the second stage used a 2K Blossom math reasoning dataset, 50K Blossom chat multi-turn dialogue dataset, and 1% randomly sampled data from the first stage, training for 3 epochs.
Note: The Mistral-7B-v0.1 pre-trained model is somewhat lacking in Chinese knowledge, so for Chinese scenarios, it is recommended to use [blossom-v3-baichuan2-7b](https://huggingface.co/Azure99/blossom-v3-baichuan2-7b).
### Inference
Inference is performed in the form of dialogue continuation.
Single-turn dialogue
```
A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: hello
|Bot|: Hello! How can I assist you today?
```
Multi-turn dialogue
```
A chat between a human and an artificial intelligence bot. The bot gives helpful, detailed, and polite answers to the human's questions.
|Human|: hello
|Bot|: Hello! How can I assist you today?</s>
|Human|: Generate a random number using python
|Bot|:
```
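A minimal sketch that wires this continuation format into 🤗 transformers (our own glue code, not the authors' reference implementation):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Azure99/blossom-v3-mistral-7b")
model = AutoModelForCausalLM.from_pretrained("Azure99/blossom-v3-mistral-7b")

prompt = (
    "A chat between a human and an artificial intelligence bot. "
    "The bot gives helpful, detailed, and polite answers to the human's questions.\n"
    "|Human|: hello\n"
    "|Bot|: "
)
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```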
Note: append `</s>` to the end of each Bot turn in the conversation history. |
aseratus1/226a44b2-f690-4bd5-8739-d700b4347af9 | aseratus1 | "2025-02-02T01:09:57Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/tinyllama-chat",
"base_model:adapter:unsloth/tinyllama-chat",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-02T00:51:13Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/tinyllama-chat
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 226a44b2-f690-4bd5-8739-d700b4347af9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/tinyllama-chat
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 474b74e7071ffe0a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/474b74e7071ffe0a_train_data.json
type:
field_input: intent
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aseratus1/226a44b2-f690-4bd5-8739-d700b4347af9
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/474b74e7071ffe0a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8ee7e4f0-9971-4026-8d8b-539310b29b24
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8ee7e4f0-9971-4026-8d8b-539310b29b24
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 226a44b2-f690-4bd5-8739-d700b4347af9
This model is a fine-tuned version of [unsloth/tinyllama-chat](https://huggingface.co/unsloth/tinyllama-chat) on an unspecified dataset (the trainer recorded no dataset name).
It achieves the following results on the evaluation set:
- Loss: 1.0976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2515 | 0.0175 | 200 | 1.0976 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
wclzz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-large_silky_boar | wclzz | "2025-04-01T17:31:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am large silky boar",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-01T17:29:39Z" | (card unavailable: the fetch returned an HTTP temporary redirect to the repo's resolve-cache README) |
JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector | JonatanGk | "2023-05-09T17:54:48Z" | 12 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"catalan",
"ca",
"dataset:catalonia_independence",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:04Z" | ---
license: apache-2.0
language: ca
tags:
- catalan
datasets:
- catalonia_independence
metrics:
- accuracy
model-index:
- name: roberta-base-ca-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: catalonia_independence
type: catalonia_independence
args: catalan
metrics:
- name: Accuracy
type: accuracy
value: 0.7611940298507462
- task:
type: text-classification
name: Text Classification
dataset:
name: catalonia_independence
type: catalonia_independence
config: catalan
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.7208955223880597
verified: true
- name: Precision Macro
type: precision
value: 0.7532458247651523
verified: true
- name: Precision Micro
type: precision
value: 0.7208955223880597
verified: true
- name: Precision Weighted
type: precision
value: 0.7367396361532118
verified: true
- name: Recall Macro
type: recall
value: 0.6880645531209203
verified: true
- name: Recall Micro
type: recall
value: 0.7208955223880597
verified: true
- name: Recall Weighted
type: recall
value: 0.7208955223880597
verified: true
- name: F1 Macro
type: f1
value: 0.7013044744309381
verified: true
- name: F1 Micro
type: f1
value: 0.7208955223880597
verified: true
- name: F1 Weighted
type: f1
value: 0.713640086434487
verified: true
- name: loss
type: loss
value: 0.6895929574966431
verified: true
widget:
- text: "Puigdemont, a l'estat espanyol: Quatre anys despr\xE9s, ens hem guanyat el\
\ dret a dir prou"
- text: "Llarena demana la detenci\xF3 de Com\xEDn i Ponsat\xED aprofitant que s\xF3\
n a It\xE0lia amb Puigdemont"
- text: "Assegura l'expert que en un 46% els catalans s'inclouen dins del que es denomina\
\ com el doble sentiment identitari. \xC9s a dir, se senten tant catalans com\
\ espanyols. 1 de cada cinc, en canvi, t\xE9 un sentiment excloent, nom\xE9s se\
\ senten catalans, i un 4% sol espanyol."
---
# roberta-base-ca-finetuned-catalonia-independence-detector
This model is a fine-tuned version of [BSC-TeMU/roberta-base-ca](https://huggingface.co/BSC-TeMU/roberta-base-ca) on the catalonia_independence dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6065
- Accuracy: 0.7612
<details>
## Training and evaluation data
The data was collected over 12 days during February and March of 2019 from tweets posted in Barcelona, and during September of 2018 from tweets posted in the town of Terrassa, Catalonia.
Each corpus is annotated with three classes, AGAINST, FAVOR and NEUTRAL, which express the stance towards the target, the independence of Catalonia.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 377 | 0.6311 | 0.7453 |
| 0.7393 | 2.0 | 754 | 0.6065 | 0.7612 |
| 0.5019 | 3.0 | 1131 | 0.6340 | 0.7547 |
| 0.3837 | 4.0 | 1508 | 0.6777 | 0.7597 |
| 0.3837 | 5.0 | 1885 | 0.7232 | 0.7582 |
</details>
### Model in action 🚀
Fast usage with **pipelines**:
```python
from transformers import pipeline
model_path = "JonatanGk/roberta-base-ca-finetuned-catalonia-independence-detector"
independence_analysis = pipeline("text-classification", model=model_path, tokenizer=model_path)
independence_analysis(
"Assegura l'expert que en un 46% els catalans s'inclouen dins del que es denomina com el doble sentiment identitari. És a dir, se senten tant catalans com espanyols. 1 de cada cinc, en canvi, té un sentiment excloent, només se senten catalans, i un 4% sol espanyol."
)
# Output:
[{'label': 'AGAINST', 'score': 0.7457581758499146}]
independence_analysis(
"Llarena demana la detenció de Comín i Ponsatí aprofitant que són a Itàlia amb Puigdemont"
)
# Output:
[{'label': 'NEUTRAL', 'score': 0.7436802983283997}]
independence_analysis(
"Puigdemont, a l'estat espanyol: Quatre anys després, ens hem guanyat el dret a dir prou"
)
# Output:
[{'label': 'FAVOR', 'score': 0.9040119647979736}]
```
[](https://colab.research.google.com/github/JonatanGk/Shared-Colab/blob/master/Catalonia_independence_Detector_(CATALAN).ipynb#scrollTo=j29NHJtOyAVU)
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
## Citation
Thx to HF.co & [@lewtun](https://github.com/lewtun) for Dataset ;)
> Special thx to [Manuel Romero/@mrm8488](https://huggingface.co/mrm8488) as my mentor & R.C.
> Created by [Jonatan Luna](https://JonatanGk.github.io) | [LinkedIn](https://www.linkedin.com/in/JonatanGk/) |
SeyedHosseini360/w2v-bert-2.0-mongolian-colab-CV16.0 | SeyedHosseini360 | "2024-07-20T09:28:11Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-14T07:28:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
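Absent official instructions, a minimal ASR sketch derived from the repo tags; the audio path is a placeholder:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="SeyedHosseini360/w2v-bert-2.0-mongolian-colab-CV16.0",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```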
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kimjaewon/paligemma2-cord-finetuned | kimjaewon | "2025-03-26T01:33:45Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-26T01:23:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
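Until the card is filled in, here is a minimal sketch, assuming (from the repository name) that this is a PaliGemma 2 checkpoint fine-tuned on the CORD receipt dataset; the image path and prompt wording are assumptions, not confirmed by this card:

```python
# Minimal sketch, assuming a PaliGemma 2 checkpoint fine-tuned on CORD;
# "receipt.png" and the prompt text are placeholders.
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from PIL import Image

model_id = "kimjaewon/paligemma2-cord-finetuned"
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("receipt.png")
inputs = processor(text="extract the receipt fields", images=image, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```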
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ikmalalfaozi/layoutlm-funsd-tf | ikmalalfaozi | "2024-05-24T04:21:12Z" | 63 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_keras_callback",
"base_model:microsoft/layoutlm-base-uncased",
"base_model:finetune:microsoft/layoutlm-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-05-24T04:09:16Z" | ---
license: mit
tags:
- generated_from_keras_callback
base_model: microsoft/layoutlm-base-uncased
model-index:
- name: layoutlm-funsd-tf
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd-tf
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2511
- Validation Loss: 0.6882
- Train Overall Precision: 0.7189
- Train Overall Recall: 0.7878
- Train Overall F1: 0.7517
- Train Overall Accuracy: 0.8039
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch |
|:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:|
| 1.6546 | 1.3264 | 0.3384 | 0.3708 | 0.3538 | 0.5774 | 0 |
| 1.0901 | 0.8303 | 0.6013 | 0.6508 | 0.6251 | 0.7392 | 1 |
| 0.7169 | 0.6666 | 0.6778 | 0.7441 | 0.7094 | 0.7864 | 2 |
| 0.5285 | 0.6429 | 0.6859 | 0.7702 | 0.7256 | 0.8022 | 3 |
| 0.4270 | 0.6216 | 0.7089 | 0.7832 | 0.7442 | 0.8092 | 4 |
| 0.3451 | 0.6699 | 0.7038 | 0.7832 | 0.7414 | 0.7972 | 5 |
| 0.2867 | 0.6886 | 0.7203 | 0.7868 | 0.7520 | 0.7965 | 6 |
| 0.2511 | 0.6882 | 0.7189 | 0.7878 | 0.7517 | 0.8039 | 7 |
### Framework versions
- Transformers 4.41.0
- TensorFlow 2.15.0
- Datasets 2.19.1
- Tokenizers 0.19.1
|
calcuis/phi4 | calcuis | "2025-01-26T23:37:13Z" | 8,170 | 0 | null | [
"gguf",
"phi4",
"gguf-connector",
"text-generation",
"en",
"arxiv:2412.08905",
"base_model:microsoft/phi-4-gguf",
"base_model:quantized:microsoft/phi-4-gguf",
"doi:10.57967/hf/4273",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-01-22T01:51:34Z" | ---
license: mit
language:
- en
base_model:
- microsoft/phi-4-gguf
pipeline_tag: text-generation
tags:
- phi4
- gguf-connector
---
# GGUF quantized and bug-fixed version of **phi4**
### review
- fixes the bug: "ResponseError: llama runner process has terminated: GGML_ASSERT(hparams.n_swa > 0) failed"
- the architecture is now defined as llama (it was previously none); everything works right away
### run the model
use any GGUF connector to interact with the GGUF file(s), e.g., [gguf-connector](https://pypi.org/project/gguf-connector/)
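if you prefer a scriptable route instead, a plain llama-cpp-python call also loads any GGUF file; a minimal sketch (the quantized filename is a placeholder for whichever .gguf you download from this repo):

```python
# Minimal sketch using llama-cpp-python as an alternative to gguf-connector;
# "phi4-q4_0.gguf" is a placeholder for a file downloaded from this repo.
from llama_cpp import Llama

llm = Llama(model_path="phi4-q4_0.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```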
### reference
- base model: microsoft/[phi-4](https://huggingface.co/microsoft/phi-4)
- bug fixed following the guide written by [unsloth](https://unsloth.ai/blog/phi4)
- tool used for quantization: [cutter](https://pypi.org/project/gguf-cutter)
### citation
[Phi-4 Technical Report](https://arxiv.org/pdf/2412.08905)
### appendices: model summary and quality (written by microsoft)
#### model summary
| | |
|-------------------------|-------------------------------------------------------------------------------|
| **Developers** | Microsoft Research |
| **Description** | `phi-4` is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning.<br><br>`phi-4` underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures |
| **Architecture** | 14B parameters, dense decoder-only Transformer model |
| **Inputs** | Text, best suited for prompts in the chat format |
| **Context length** | 16K tokens |
| **GPUs** | 1920 H100-80G |
| **Training time** | 21 days |
| **Training data** | 9.8T tokens |
| **Outputs** | Generated text in response to input |
| **Dates** | October 2024 – November 2024 |
| **Status** | Static model trained on an offline dataset with cutoff dates of June 2024 and earlier for publicly available data |
| **Release date** | December 12, 2024 |
| **License** | MIT |
#### model quality
to understand the capabilities, we (here referring to the Microsoft side) compare `phi-4` with a set of models over OpenAI's SimpleEval benchmark; the table below gives a high-level overview of model quality on representative benchmarks, where higher numbers indicate better performance:
| **Category** | **Benchmark** | **phi-4** (14B) | **phi-3** (14B) | **Qwen 2.5** (14B instruct) | **GPT-4o-mini** | **Llama-3.3** (70B instruct) | **Qwen 2.5** (72B instruct) | **GPT-4o** |
|------------------------------|---------------|-----------|-----------------|----------------------|----------------------|--------------------|-------------------|-----------------|
| Popular Aggregated Benchmark | MMLU | 84.8 | 77.9 | 79.9 | 81.8 | 86.3 | 85.3 | **88.1** |
| Science | GPQA | **56.1** | 31.2 | 42.9 | 40.9 | 49.1 | 49.0 | 50.6 |
| Math | MGSM<br>MATH | 80.6<br>**80.4** | 53.5<br>44.6 | 79.6<br>75.6 | 86.5<br>73.0 | 89.1<br>66.3* | 87.3<br>80.0 | **90.4**<br>74.6 |
| Code Generation | HumanEval | 82.6 | 67.8 | 72.1 | 86.2 | 78.9* | 80.4 | **90.6** |
| Factual Knowledge | SimpleQA | 3.0 | 7.6 | 5.4 | 9.9 | 20.9 | 10.2 | **39.4** |
| Reasoning | DROP | 75.5 | 68.3 | 85.5 | 79.3 | **90.2** | 76.7 | 80.9 |
\* these scores are lower than those reported by Meta, perhaps because simple-evals has a strict formatting requirement that Llama models have particular trouble following.
|
souging/2583c680-ffb3-4037-9d5c-821d6bfdcad2 | souging | "2025-03-26T12:19:15Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna-open-llama-3b-v2",
"base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2",
"license:apache-2.0",
"region:us"
] | null | "2025-03-26T05:15:39Z" | ---
library_name: peft
license: apache-2.0
base_model: heegyu/WizardVicuna-open-llama-3b-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2583c680-ffb3-4037-9d5c-821d6bfdcad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna-open-llama-3b-v2
bf16: auto
dataset_prepared_path: null
datasets:
- data_files:
- a524fe0959fac29c_train_data.json
ds_type: json
format: custom
path: /root/G.O.D-test/core/data/a524fe0959fac29c_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
eval_max_new_tokens: 128
eval_steps: 0
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: souging/2583c680-ffb3-4037-9d5c-821d6bfdcad2
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000202
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 5
mlflow_experiment_name: /tmp/a524fe0959fac29c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: false
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 0
saves_per_epoch: null
sequence_len: 2048
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
warmup_steps: 100
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 2583c680-ffb3-4037-9d5c-821d6bfdcad2
This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000202
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 160
- total_eval_batch_size: 40
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.3 |
rafiulrumy/wav2vec2-large-xlsr-hindi-demo-colab | rafiulrumy | "2021-12-08T07:47:56Z" | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-hindi-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-hindi-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
spacemanidol/flan-t5-base-5-6-xsum | spacemanidol | "2023-03-10T22:50:14Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-02-28T18:21:32Z" | ---
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: base-5-6
results:
- task:
name: Summarization
type: summarization
dataset:
name: xsum
type: xsum
config: default
split: validation
args: default
metrics:
- name: Rouge1
type: rouge
value: 39.0404
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base-5-6
This model is a fine-tuned version of [x/base-5-6/](https://huggingface.co/x/base-5-6/) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6972
- Rouge1: 39.0404
- Rouge2: 15.9169
- Rougel: 31.2288
- Rougelsum: 31.2183
- Gen Len: 26.8873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.10.0
- Tokenizers 0.13.2
|
Erenosxx/whisper-turbo-tr_combined_10_percent | Erenosxx | "2025-03-27T21:29:59Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-large-v3-turbo",
"base_model:adapter:openai/whisper-large-v3-turbo",
"license:mit",
"region:us"
] | null | "2025-03-27T21:19:49Z" | ---
library_name: peft
license: mit
base_model: openai/whisper-large-v3-turbo
tags:
- generated_from_trainer
model-index:
- name: whisper-turbo-tr_combined_10_percent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-turbo-tr_combined_10_percent
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.0
- Transformers 4.50.0
- Pytorch 2.6.0+cu124
- Datasets 2.14.5
- Tokenizers 0.21.1 |
silent666/google-gemma-2b-1718827098 | silent666 | "2024-06-19T19:58:19Z" | 2 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | "2024-06-19T19:58:18Z" | ---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
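Since the section is unfilled, here is a minimal sketch, assuming this repository holds a causal-LM PEFT adapter trained on top of google/gemma-2b (the prompt is a placeholder):

```python
# Minimal sketch, assuming a causal-LM PEFT adapter for google/gemma-2b.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "silent666/google-gemma-2b-1718827098")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```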
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |
conexaosv/conexaosv | conexaosv | "2025-03-07T12:54:59Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-03-07T12:54:58Z" | ---
license: apache-2.0
---
|
hopkins/bert-wiki-choked-2 | hopkins | "2023-06-27T02:54:26Z" | 53 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-06-27T02:53:08Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: bert-wiki-choked-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-wiki-choked-2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | nan |
| No log | 2.0 | 2 | nan |
| No log | 3.0 | 3 | nan |
| No log | 4.0 | 4 | nan |
| No log | 5.0 | 5 | nan |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
ai-top-tools/top-deepnude-ai | ai-top-tools | "2025-04-04T19:01:13Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-04T18:57:52Z" | # Top 5 DeepNude AI Tools in 2025: Complete Comparison Guide
Looking for the most advanced DeepNude AI generators in 2025? This comprehensive guide compares the leading AI undressing tools available today, including free options, mobile compatibility, and realism ratings. We've tested dozens of applications to bring you the definitive ranking of **DeepNude AI software** based on performance, user experience, and value.
## Best DeepNude AI Tools at a Glance
| Tool Name | Realism Rating | Free Option | Best For | Starting Price |
|-----------|----------------|-------------|----------|----------------|
| Deep-Nude.AI | 5/5 ⭐ | Yes (Demo) | Overall quality & speed | $9.99 |
| DeepNude.cc | 4/5 ⭐ | Yes (Basic) | Beginners & simplicity | Freemium |
| AINude.AI | 4/5 ⭐ | Limited | Customization options | Credit-based |
| DeepNudify | 3/5 ⭐ | Completely free | No-cost solution | Free (with ads) |
| Undress Her AI | 4/5 ⭐ | Demo available | Female-specific results | Various plans |
## **Want Instant Results? Try This Free DeepNude AI Tool**
If you want to **undress a photo right now**, this app offers a **free trial with 3 undressings**:
⏩ **[Sign up for Deep-Nude.AI Now](https://bestaitools.top/fgRB)** and get **3 free undressings instantly!**

## 1. Deep-Nude.AI: Premium DeepNude Generator with Highest Realism
**Overall Rating: 5/5 ⭐**
Deep-Nude.AI stands out as the **most realistic AI undressing tool** in 2025, delivering exceptional results across various image types. Unlike competitors, it produces complete, uncensored transformations with remarkable anatomical accuracy.
⏩ **[Sign up for Deep-Nude.AI Now](https://bestaitools.top/fgRB)**
### Key Features:
- Ultra-fast processing (results in under 30 seconds)
- Supports multiple body types and poses
- No blurring or censorship in premium version
- Cross-platform compatibility (mobile and desktop)
- Intuitive user interface with drag-and-drop functionality
### Pricing Structure:
- **Free Demo**: Try with watermark
- **Basic Plan**: $9.99 for limited monthly uses
- **Premium**: Unlimited access with advanced features
### Who Should Use It:
Ideal for users seeking the highest quality AI transformations with minimal effort. The technology handles complex clothing patterns and various lighting conditions better than any alternative.
## 2. DeepNude.cc: Streamlined Experience for Casual Users
**Overall Rating: 4/5 ⭐**
DeepNude.cc offers the perfect balance of simplicity and quality, making it the **easiest DeepNude AI to use** for beginners. This web-based solution requires no technical knowledge or software installation.
⏩ **[Sign up for DeepNude.cc Now](https://bestaitools.top/fgRB)**
### Key Features:
- One-click nudification process
- Browser-based (works on any device)
- Fast rendering times
- Support for diverse body types
- High-resolution downloadable results
### Pricing Structure:
- **Free Basic**: Limited daily transformations
- **Premium Access**: Unlocks all features (subscription-based)
### Who Should Use It:
Perfect for those who want immediate results without navigating complex settings or downloading software. The straightforward approach makes it accessible to anyone.
## 3. AINude.AI: Most Customizable DeepNude Technology
**Overall Rating: 4/5 ⭐**
AINude.AI distinguishes itself with extensive customization options, allowing users to adjust specific aspects of the generated images. This makes it the **best AI undressing tool for personalization**.
⏩ **[Sign up for AINude.AI Now](https://bestaitools.top/fgRB)**
### Key Features:
- Adjustable body type parameters
- Customizable anatomical details
- AI avatar creation from text prompts
- Face-swapping capabilities
- High-definition output options
### Pricing Structure:
- Freemium model with credit system
- Various package options for regular users
### Who Should Use It:
Recommended for users who want precise control over the AI-generated output. The advanced settings make it suitable for creative projects requiring specific results.
## 4. DeepNudify: Top Free DeepNude AI Solution
**Overall Rating: 3/5 ⭐**
DeepNudify leads the category of **free DeepNude generators** with no upfront cost. While the quality doesn't match premium options, it provides accessible AI undressing capabilities to all users.
### Key Features:
- Completely free online service
- No registration required
- Works on most devices with browser support
- Simple, straightforward interface
- Quick processing times
### Pricing Structure:
- 100% free with ad support
- Optional premium features
### Who Should Use It:
Best for casual users or those testing the technology before committing to paid alternatives. The ad-supported model makes it accessible to everyone without financial barriers.
## 5. Undress Her AI: Specialized Female AI Undressing Tool
**Overall Rating: 4/5 ⭐**
Undress Her AI focuses exclusively on female image transformations, making it the **most specialized DeepNude AI** on our list. This narrow focus allows for higher quality in its specific niche.
### Key Features:
- Advanced female anatomical accuracy
- Text prompt customization
- High-definition results
- NSFW filter bypass technology
- Regular model updates
### Pricing Structure:
- Free demonstration available
- Multiple paid tiers based on usage needs
### Who Should Use It:
Ideal for users specifically looking for female image transformations with high realism. The specialized approach delivers better results in this category than general-purpose tools.
## Important Legal and Ethical Considerations
Before using any DeepNude AI technology, understand these critical points:
1. **Consent is mandatory**: Never upload photos of individuals without their explicit permission.
2. **Legal implications**: Creating explicit deepfakes may violate laws in many jurisdictions.
3. **Privacy risks**: Verify each service's data retention and privacy policies.
4. **Responsible usage**: Consider using these tools for artistic or educational purposes only.
## Frequently Asked Questions About DeepNude AI
### Are these DeepNude tools legal?
The legality varies by location. In many regions, creating non-consensual explicit imagery may violate privacy laws, revenge porn legislation, or harassment statutes. Always check your local regulations.
### How accurate are DeepNude AI results in 2025?
The technology has improved significantly, with top tools like Deep-Nude.AI achieving approximately 90% realism in optimal conditions. Results depend heavily on the original image quality, lighting, and pose.
### Do these applications store uploaded images?
Policies vary by provider. Most reputable services claim to delete images after processing, but always review the privacy policy before uploading sensitive content.
### Which DeepNude app works on mobile devices?
All tools on our list offer some form of mobile compatibility, with Deep-Nude.AI and DeepNude.cc providing the best mobile experiences through responsive web interfaces.
### Can AI detect DeepNude generated images?
Yes, forensic AI tools can identify most artificially generated nude images, though the detection technology remains in a constant arms race with generation technology.
## Conclusion: Choosing the Right DeepNude AI in 2025
After extensive testing, **Deep-Nude.AI** emerges as the clear leader for most users seeking high-quality results with minimal effort. Its combination of realism, speed, and usability places it ahead of competitors in the DeepNude AI category.
For those prioritizing different factors:
- **Budget-conscious users**: Try DeepNudify's free offering
- **Customization enthusiasts**: Explore AINude.AI's detailed settings
- **Beginners**: DeepNude.cc provides the simplest experience
- **Female-specific focus**: Undress Her AI offers specialized results
Remember that regardless of which tool you choose, ethical usage and respect for privacy should always be your primary consideration.
---
*Last updated: April 2025 with the most current DeepNude AI tools and features.* |
Joiel/John6666_lewdify-v90-sdxl | Joiel | "2025-03-21T18:12:37Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"realistic",
"photorealistic",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2025-03-21T18:12:36Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- realistic
- photorealistic
- pony
---
Original model is [here](https://civitai.com/models/1012949/lewdify?modelVersionId=1162602).
This model was created by [Sanctusmorti](https://civitai.com/user/Sanctusmorti).
|
saideep-arikontham/bigbird-resume-fit-predictor_v2 | saideep-arikontham | "2025-03-21T18:31:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-21T18:31:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
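As a stopgap while the card is empty, here is a minimal sketch, assuming (from the repository name) a BigBird sequence-classification head that scores resume/job-description fit; the input format and label meaning are assumptions:

```python
# Minimal sketch, assuming a BigBird sequence-classification checkpoint;
# the concatenated resume/job text format is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "saideep-arikontham/bigbird-resume-fit-predictor_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Resume: 5 years of Python. Job: backend engineer, Python required."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs)  # class meaning is not documented in this card
```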
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Romain-XV/988d1698-8a33-464d-96cb-d1936d182460 | Romain-XV | "2025-01-30T11:46:10Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M-Instruct",
"base_model:adapter:unsloth/SmolLM-135M-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-30T11:45:41Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 988d1698-8a33-464d-96cb-d1936d182460
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2ca7b289702de1c8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2ca7b289702de1c8_train_data.json
type:
field_instruction: full_prompt
field_output: example
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/988d1698-8a33-464d-96cb-d1936d182460
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: true
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
max_steps: 829
micro_batch_size: 4
mlflow_experiment_name: /tmp/2ca7b289702de1c8_train_data.json
model_type: AutoModelForCausalLM
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 48f7ffc8-f75e-432e-b390-464026ee686b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 48f7ffc8-f75e-432e-b390-464026ee686b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 988d1698-8a33-464d-96cb-d1936d182460
This model is a fine-tuned version of [unsloth/SmolLM-135M-Instruct](https://huggingface.co/unsloth/SmolLM-135M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.1905 | 1 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
array/sat-dynamic-13b | array | "2025-02-28T17:00:16Z" | 23 | 0 | transformers | [
"transformers",
"safetensors",
"dataset:array/SAT",
"arxiv:2412.07755",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2025-02-27T23:09:22Z" | ---
library_name: transformers
license: mit
datasets:
- array/SAT
---
# Model Card for Model ID
Please check https://github.com/arijitray1993/SAT for instructions on how to run inference with this model.
If you use the model, please cite:
```
@misc{ray2024satspatialaptitudetraining,
title={SAT: Spatial Aptitude Training for Multimodal Language Models},
author={Arijit Ray and Jiafei Duan and Reuben Tan and Dina Bashkirova and Rose Hendrix and Kiana Ehsani and Aniruddha Kembhavi and Bryan A. Plummer and Ranjay Krishna and Kuo-Hao Zeng and Kate Saenko},
year={2024},
eprint={2412.07755},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2412.07755},
}
``` |
gvo1112/task-1-Qwen-Qwen2.5-7B-Instruct-1736202741 | gvo1112 | "2025-01-06T22:32:22Z" | 23 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | "2025-01-06T22:32:21Z" | ---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
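Pending official instructions, here is a minimal sketch, assuming this repository holds a PEFT adapter for Qwen/Qwen2.5-7B-Instruct (the chat message is a placeholder):

```python
# Minimal sketch, assuming a PEFT adapter for Qwen/Qwen2.5-7B-Instruct.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "gvo1112/task-1-Qwen-Qwen2.5-7B-Instruct-1736202741"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)  # loads base model + adapter
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

messages = [{"role": "user", "content": "Say hi."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```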
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2 |
MayBashendy/ArabicNewSplits2_FineTuningAraBERT_run3_AugV5_k100_task1_organization | MayBashendy | "2024-12-02T22:50:29Z" | 165 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-02T22:04:08Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits2_FineTuningAraBERT_run3_AugV5_k100_task1_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits2_FineTuningAraBERT_run3_AugV5_k100_task1_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9821
- Qwk: 0.5511
- Mse: 0.9821
- Rmse: 0.9910
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0050 | 2 | 4.9357 | -0.0238 | 4.9357 | 2.2216 |
| No log | 0.0099 | 4 | 3.1123 | 0.0589 | 3.1123 | 1.7642 |
| No log | 0.0149 | 6 | 1.8930 | 0.1383 | 1.8930 | 1.3759 |
| No log | 0.0198 | 8 | 1.6165 | 0.0182 | 1.6165 | 1.2714 |
| No log | 0.0248 | 10 | 1.6173 | 0.0167 | 1.6173 | 1.2717 |
| No log | 0.0297 | 12 | 1.2527 | 0.0345 | 1.2527 | 1.1192 |
| No log | 0.0347 | 14 | 1.1217 | 0.1946 | 1.1217 | 1.0591 |
| No log | 0.0396 | 16 | 1.1197 | 0.2090 | 1.1197 | 1.0582 |
| No log | 0.0446 | 18 | 1.2116 | 0.2868 | 1.2116 | 1.1007 |
| No log | 0.0495 | 20 | 1.4816 | 0.0245 | 1.4816 | 1.2172 |
| No log | 0.0545 | 22 | 1.4473 | 0.0436 | 1.4473 | 1.2030 |
| No log | 0.0594 | 24 | 1.2725 | 0.1359 | 1.2725 | 1.1281 |
| No log | 0.0644 | 26 | 1.2177 | 0.1562 | 1.2177 | 1.1035 |
| No log | 0.0693 | 28 | 1.1435 | 0.3498 | 1.1435 | 1.0693 |
| No log | 0.0743 | 30 | 1.0965 | 0.3050 | 1.0965 | 1.0471 |
| No log | 0.0792 | 32 | 1.1515 | 0.3696 | 1.1515 | 1.0731 |
| No log | 0.0842 | 34 | 1.0981 | 0.262 | 1.0981 | 1.0479 |
| No log | 0.0891 | 36 | 1.0560 | 0.262 | 1.0560 | 1.0276 |
| No log | 0.0941 | 38 | 1.0505 | 0.2885 | 1.0505 | 1.0250 |
| No log | 0.0990 | 40 | 1.0849 | 0.2556 | 1.0849 | 1.0416 |
| No log | 0.1040 | 42 | 1.1229 | 0.2633 | 1.1229 | 1.0597 |
| No log | 0.1089 | 44 | 1.2844 | 0.2686 | 1.2844 | 1.1333 |
| No log | 0.1139 | 46 | 1.1989 | 0.3234 | 1.1989 | 1.0949 |
| No log | 0.1188 | 48 | 1.2036 | 0.2742 | 1.2036 | 1.0971 |
| No log | 0.1238 | 50 | 1.2754 | 0.2215 | 1.2754 | 1.1293 |
| No log | 0.1287 | 52 | 1.4076 | 0.0542 | 1.4076 | 1.1864 |
| No log | 0.1337 | 54 | 1.4155 | 0.1582 | 1.4155 | 1.1897 |
| No log | 0.1386 | 56 | 1.4714 | 0.1358 | 1.4714 | 1.2130 |
| No log | 0.1436 | 58 | 1.5473 | 0.0941 | 1.5473 | 1.2439 |
| No log | 0.1485 | 60 | 1.3799 | 0.1216 | 1.3799 | 1.1747 |
| No log | 0.1535 | 62 | 1.1919 | 0.2554 | 1.1919 | 1.0917 |
| No log | 0.1584 | 64 | 1.1555 | 0.3382 | 1.1555 | 1.0749 |
| No log | 0.1634 | 66 | 1.1851 | 0.2721 | 1.1851 | 1.0886 |
| No log | 0.1683 | 68 | 1.1847 | 0.2667 | 1.1847 | 1.0884 |
| No log | 0.1733 | 70 | 1.1082 | 0.3346 | 1.1082 | 1.0527 |
| No log | 0.1782 | 72 | 1.0470 | 0.2321 | 1.0470 | 1.0232 |
| No log | 0.1832 | 74 | 1.0072 | 0.262 | 1.0072 | 1.0036 |
| No log | 0.1881 | 76 | 1.0314 | 0.1850 | 1.0314 | 1.0156 |
| No log | 0.1931 | 78 | 1.2340 | 0.2196 | 1.2340 | 1.1108 |
| No log | 0.1980 | 80 | 1.2835 | 0.2313 | 1.2835 | 1.1329 |
| No log | 0.2030 | 82 | 1.1690 | 0.2826 | 1.1690 | 1.0812 |
| No log | 0.2079 | 84 | 1.1292 | 0.3065 | 1.1292 | 1.0626 |
| No log | 0.2129 | 86 | 1.2492 | 0.1702 | 1.2492 | 1.1177 |
| No log | 0.2178 | 88 | 1.2318 | 0.2273 | 1.2318 | 1.1099 |
| No log | 0.2228 | 90 | 1.2582 | 0.4167 | 1.2582 | 1.1217 |
| No log | 0.2277 | 92 | 1.2710 | 0.2960 | 1.2710 | 1.1274 |
| No log | 0.2327 | 94 | 1.2468 | 0.25 | 1.2468 | 1.1166 |
| No log | 0.2376 | 96 | 1.2331 | 0.2263 | 1.2331 | 1.1104 |
| No log | 0.2426 | 98 | 1.2291 | 0.2579 | 1.2291 | 1.1086 |
| No log | 0.2475 | 100 | 1.2019 | 0.3447 | 1.2019 | 1.0963 |
| No log | 0.2525 | 102 | 1.1477 | 0.2831 | 1.1477 | 1.0713 |
| No log | 0.2574 | 104 | 1.1269 | 0.3017 | 1.1269 | 1.0616 |
| No log | 0.2624 | 106 | 1.0622 | 0.2878 | 1.0622 | 1.0306 |
| No log | 0.2673 | 108 | 1.0792 | 0.3457 | 1.0792 | 1.0388 |
| No log | 0.2723 | 110 | 1.1760 | 0.3418 | 1.1760 | 1.0844 |
| No log | 0.2772 | 112 | 1.2486 | 0.3418 | 1.2486 | 1.1174 |
| No log | 0.2822 | 114 | 1.2450 | 0.2723 | 1.2450 | 1.1158 |
| No log | 0.2871 | 116 | 1.2722 | 0.3227 | 1.2722 | 1.1279 |
| No log | 0.2921 | 118 | 1.3742 | 0.1805 | 1.3742 | 1.1723 |
| No log | 0.2970 | 120 | 1.3342 | 0.2831 | 1.3342 | 1.1551 |
| No log | 0.3020 | 122 | 1.0444 | 0.3187 | 1.0444 | 1.0220 |
| No log | 0.3069 | 124 | 0.9945 | 0.4375 | 0.9945 | 0.9972 |
| No log | 0.3119 | 126 | 1.0218 | 0.3523 | 1.0218 | 1.0109 |
| No log | 0.3168 | 128 | 1.0475 | 0.2768 | 1.0475 | 1.0235 |
| No log | 0.3218 | 130 | 1.1948 | 0.3236 | 1.1948 | 1.0931 |
| No log | 0.3267 | 132 | 1.2954 | 0.4194 | 1.2954 | 1.1381 |
| No log | 0.3317 | 134 | 1.3539 | 0.4454 | 1.3539 | 1.1636 |
| No log | 0.3366 | 136 | 1.4191 | 0.3831 | 1.4191 | 1.1913 |
| No log | 0.3416 | 138 | 1.3298 | 0.3553 | 1.3298 | 1.1532 |
| No log | 0.3465 | 140 | 1.1964 | 0.2633 | 1.1964 | 1.0938 |
| No log | 0.3515 | 142 | 1.1329 | 0.2633 | 1.1329 | 1.0644 |
| No log | 0.3564 | 144 | 1.0833 | 0.2633 | 1.0833 | 1.0408 |
| No log | 0.3614 | 146 | 1.0495 | 0.2500 | 1.0495 | 1.0244 |
| No log | 0.3663 | 148 | 1.0909 | 0.4474 | 1.0909 | 1.0445 |
| No log | 0.3713 | 150 | 1.0878 | 0.3750 | 1.0878 | 1.0430 |
| No log | 0.3762 | 152 | 1.0173 | 0.3990 | 1.0173 | 1.0086 |
| No log | 0.3812 | 154 | 1.0111 | 0.3571 | 1.0111 | 1.0056 |
| No log | 0.3861 | 156 | 1.1106 | 0.4249 | 1.1106 | 1.0539 |
| No log | 0.3911 | 158 | 1.3480 | 0.3515 | 1.3480 | 1.1611 |
| No log | 0.3960 | 160 | 1.6070 | 0.3605 | 1.6070 | 1.2677 |
| No log | 0.4010 | 162 | 1.6749 | 0.3157 | 1.6749 | 1.2942 |
| No log | 0.4059 | 164 | 1.7675 | 0.3228 | 1.7675 | 1.3295 |
| No log | 0.4109 | 166 | 1.6769 | 0.2573 | 1.6769 | 1.2950 |
| No log | 0.4158 | 168 | 1.5514 | 0.2658 | 1.5514 | 1.2456 |
| No log | 0.4208 | 170 | 1.4129 | 0.3166 | 1.4129 | 1.1887 |
| No log | 0.4257 | 172 | 1.2628 | 0.2760 | 1.2628 | 1.1237 |
| No log | 0.4307 | 174 | 1.1732 | 0.3898 | 1.1732 | 1.0832 |
| No log | 0.4356 | 176 | 1.1915 | 0.2762 | 1.1915 | 1.0916 |
| No log | 0.4406 | 178 | 1.2539 | 0.2762 | 1.2539 | 1.1198 |
| No log | 0.4455 | 180 | 1.3108 | 0.2806 | 1.3108 | 1.1449 |
| No log | 0.4505 | 182 | 1.3564 | 0.2597 | 1.3564 | 1.1646 |
| No log | 0.4554 | 184 | 1.3639 | 0.1873 | 1.3639 | 1.1679 |
| No log | 0.4604 | 186 | 1.3255 | 0.1873 | 1.3255 | 1.1513 |
| No log | 0.4653 | 188 | 1.1675 | 0.2862 | 1.1675 | 1.0805 |
| No log | 0.4703 | 190 | 1.1543 | 0.3679 | 1.1543 | 1.0744 |
| No log | 0.4752 | 192 | 1.3006 | 0.2282 | 1.3006 | 1.1405 |
| No log | 0.4802 | 194 | 1.4578 | 0.2100 | 1.4578 | 1.2074 |
| No log | 0.4851 | 196 | 1.4122 | 0.2212 | 1.4122 | 1.1883 |
| No log | 0.4901 | 198 | 1.2729 | 0.3186 | 1.2729 | 1.1282 |
| No log | 0.4950 | 200 | 1.2833 | 0.2593 | 1.2833 | 1.1328 |
| No log | 0.5 | 202 | 1.3564 | 0.2427 | 1.3564 | 1.1646 |
| No log | 0.5050 | 204 | 1.2966 | 0.3034 | 1.2966 | 1.1387 |
| No log | 0.5099 | 206 | 1.3056 | 0.3630 | 1.3056 | 1.1426 |
| No log | 0.5149 | 208 | 1.3959 | 0.4017 | 1.3959 | 1.1815 |
| No log | 0.5198 | 210 | 1.3393 | 0.3841 | 1.3393 | 1.1573 |
| No log | 0.5248 | 212 | 1.2200 | 0.4518 | 1.2200 | 1.1045 |
| No log | 0.5297 | 214 | 1.0861 | 0.4595 | 1.0861 | 1.0421 |
| No log | 0.5347 | 216 | 1.0608 | 0.4307 | 1.0608 | 1.0299 |
| No log | 0.5396 | 218 | 1.0755 | 0.4307 | 1.0755 | 1.0371 |
| No log | 0.5446 | 220 | 1.1157 | 0.4119 | 1.1157 | 1.0563 |
| No log | 0.5495 | 222 | 1.1533 | 0.4038 | 1.1533 | 1.0739 |
| No log | 0.5545 | 224 | 1.1522 | 0.4595 | 1.1522 | 1.0734 |
| No log | 0.5594 | 226 | 1.1349 | 0.4038 | 1.1349 | 1.0653 |
| No log | 0.5644 | 228 | 1.0934 | 0.4119 | 1.0934 | 1.0457 |
| No log | 0.5693 | 230 | 1.0646 | 0.4119 | 1.0646 | 1.0318 |
| No log | 0.5743 | 232 | 1.0699 | 0.4038 | 1.0699 | 1.0344 |
| No log | 0.5792 | 234 | 1.0396 | 0.4083 | 1.0396 | 1.0196 |
| No log | 0.5842 | 236 | 0.9948 | 0.4083 | 0.9948 | 0.9974 |
| No log | 0.5891 | 238 | 0.9714 | 0.4083 | 0.9714 | 0.9856 |
| No log | 0.5941 | 240 | 1.0678 | 0.4615 | 1.0678 | 1.0333 |
| No log | 0.5990 | 242 | 1.2301 | 0.5129 | 1.2301 | 1.1091 |
| No log | 0.6040 | 244 | 1.4834 | 0.4065 | 1.4834 | 1.2179 |
| No log | 0.6089 | 246 | 1.5747 | 0.3352 | 1.5747 | 1.2549 |
| No log | 0.6139 | 248 | 1.3678 | 0.4052 | 1.3678 | 1.1695 |
| No log | 0.6188 | 250 | 1.2532 | 0.4786 | 1.2532 | 1.1194 |
| No log | 0.6238 | 252 | 1.1258 | 0.5281 | 1.1258 | 1.0610 |
| No log | 0.6287 | 254 | 1.0911 | 0.5276 | 1.0911 | 1.0445 |
| No log | 0.6337 | 256 | 1.0899 | 0.5276 | 1.0899 | 1.0440 |
| No log | 0.6386 | 258 | 1.1766 | 0.4908 | 1.1766 | 1.0847 |
| No log | 0.6436 | 260 | 1.2381 | 0.3590 | 1.2381 | 1.1127 |
| No log | 0.6485 | 262 | 1.2235 | 0.3208 | 1.2235 | 1.1061 |
| No log | 0.6535 | 264 | 1.2018 | 0.3812 | 1.2018 | 1.0963 |
| No log | 0.6584 | 266 | 1.1227 | 0.5072 | 1.1227 | 1.0596 |
| No log | 0.6634 | 268 | 1.0149 | 0.4853 | 1.0149 | 1.0074 |
| No log | 0.6683 | 270 | 0.9836 | 0.4633 | 0.9836 | 0.9918 |
| No log | 0.6733 | 272 | 0.9864 | 0.3613 | 0.9864 | 0.9932 |
| No log | 0.6782 | 274 | 1.0709 | 0.5874 | 1.0709 | 1.0349 |
| No log | 0.6832 | 276 | 1.1277 | 0.4886 | 1.1277 | 1.0619 |
| No log | 0.6881 | 278 | 1.2038 | 0.4625 | 1.2038 | 1.0972 |
| No log | 0.6931 | 280 | 1.3007 | 0.3747 | 1.3007 | 1.1405 |
| No log | 0.6980 | 282 | 1.3318 | 0.3992 | 1.3318 | 1.1540 |
| No log | 0.7030 | 284 | 1.2379 | 0.4375 | 1.2379 | 1.1126 |
| No log | 0.7079 | 286 | 1.1530 | 0.4793 | 1.1530 | 1.0738 |
| No log | 0.7129 | 288 | 1.1318 | 0.4153 | 1.1318 | 1.0639 |
| No log | 0.7178 | 290 | 1.1684 | 0.4375 | 1.1684 | 1.0809 |
| No log | 0.7228 | 292 | 1.1873 | 0.4512 | 1.1873 | 1.0896 |
| No log | 0.7277 | 294 | 1.1213 | 0.4296 | 1.1213 | 1.0589 |
| No log | 0.7327 | 296 | 1.0579 | 0.4971 | 1.0579 | 1.0285 |
| No log | 0.7376 | 298 | 0.9665 | 0.4320 | 0.9665 | 0.9831 |
| No log | 0.7426 | 300 | 0.9158 | 0.4493 | 0.9158 | 0.9570 |
| No log | 0.7475 | 302 | 0.8865 | 0.35 | 0.8865 | 0.9415 |
| No log | 0.7525 | 304 | 0.8994 | 0.35 | 0.8994 | 0.9483 |
| No log | 0.7574 | 306 | 0.9727 | 0.4750 | 0.9727 | 0.9863 |
| No log | 0.7624 | 308 | 1.1546 | 0.4864 | 1.1546 | 1.0745 |
| No log | 0.7673 | 310 | 1.3590 | 0.3654 | 1.3590 | 1.1658 |
| No log | 0.7723 | 312 | 1.5771 | 0.3373 | 1.5771 | 1.2558 |
| No log | 0.7772 | 314 | 1.6051 | 0.4159 | 1.6051 | 1.2669 |
| No log | 0.7822 | 316 | 1.4622 | 0.4375 | 1.4622 | 1.2092 |
| No log | 0.7871 | 318 | 1.3406 | 0.4499 | 1.3406 | 1.1578 |
| No log | 0.7921 | 320 | 1.1896 | 0.4504 | 1.1896 | 1.0907 |
| No log | 0.7970 | 322 | 1.1480 | 0.4375 | 1.1480 | 1.0714 |
| No log | 0.8020 | 324 | 1.1282 | 0.4141 | 1.1282 | 1.0622 |
| No log | 0.8069 | 326 | 1.1677 | 0.4141 | 1.1677 | 1.0806 |
| No log | 0.8119 | 328 | 1.2263 | 0.4627 | 1.2263 | 1.1074 |
| No log | 0.8168 | 330 | 1.2735 | 0.4627 | 1.2735 | 1.1285 |
| No log | 0.8218 | 332 | 1.1776 | 0.4752 | 1.1776 | 1.0852 |
| No log | 0.8267 | 334 | 1.0808 | 0.5 | 1.0808 | 1.0396 |
| No log | 0.8317 | 336 | 1.0045 | 0.5038 | 1.0045 | 1.0022 |
| No log | 0.8366 | 338 | 0.9546 | 0.5408 | 0.9546 | 0.9770 |
| No log | 0.8416 | 340 | 0.9051 | 0.5822 | 0.9051 | 0.9514 |
| No log | 0.8465 | 342 | 0.8683 | 0.6098 | 0.8683 | 0.9318 |
| No log | 0.8515 | 344 | 0.8552 | 0.6176 | 0.8552 | 0.9247 |
| No log | 0.8564 | 346 | 0.8584 | 0.6037 | 0.8584 | 0.9265 |
| No log | 0.8614 | 348 | 0.8698 | 0.5666 | 0.8698 | 0.9326 |
| No log | 0.8663 | 350 | 0.8902 | 0.5637 | 0.8902 | 0.9435 |
| No log | 0.8713 | 352 | 0.8769 | 0.4798 | 0.8769 | 0.9364 |
| No log | 0.8762 | 354 | 0.9089 | 0.4872 | 0.9089 | 0.9534 |
| No log | 0.8812 | 356 | 0.9778 | 0.5111 | 0.9778 | 0.9888 |
| No log | 0.8861 | 358 | 0.9319 | 0.4850 | 0.9319 | 0.9653 |
| No log | 0.8911 | 360 | 0.9100 | 0.4798 | 0.9100 | 0.9539 |
| No log | 0.8960 | 362 | 0.9627 | 0.4942 | 0.9627 | 0.9812 |
| No log | 0.9010 | 364 | 0.9823 | 0.5183 | 0.9823 | 0.9911 |
| No log | 0.9059 | 366 | 1.0074 | 0.5525 | 1.0074 | 1.0037 |
| No log | 0.9109 | 368 | 1.0543 | 0.5634 | 1.0543 | 1.0268 |
| No log | 0.9158 | 370 | 1.0450 | 0.5349 | 1.0450 | 1.0223 |
| No log | 0.9208 | 372 | 0.9326 | 0.5980 | 0.9326 | 0.9657 |
| No log | 0.9257 | 374 | 0.8543 | 0.6192 | 0.8543 | 0.9243 |
| No log | 0.9307 | 376 | 0.8404 | 0.5770 | 0.8404 | 0.9167 |
| No log | 0.9356 | 378 | 0.8401 | 0.5732 | 0.8401 | 0.9165 |
| No log | 0.9406 | 380 | 0.8883 | 0.6087 | 0.8883 | 0.9425 |
| No log | 0.9455 | 382 | 0.9518 | 0.5584 | 0.9518 | 0.9756 |
| No log | 0.9505 | 384 | 0.9798 | 0.5562 | 0.9798 | 0.9898 |
| No log | 0.9554 | 386 | 0.9811 | 0.5562 | 0.9811 | 0.9905 |
| No log | 0.9604 | 388 | 0.8988 | 0.6020 | 0.8988 | 0.9481 |
| No log | 0.9653 | 390 | 0.9030 | 0.6020 | 0.9030 | 0.9502 |
| No log | 0.9703 | 392 | 0.8765 | 0.6336 | 0.8765 | 0.9362 |
| No log | 0.9752 | 394 | 0.8368 | 0.6057 | 0.8368 | 0.9147 |
| No log | 0.9802 | 396 | 0.8354 | 0.5476 | 0.8354 | 0.9140 |
| No log | 0.9851 | 398 | 0.8349 | 0.5825 | 0.8349 | 0.9138 |
| No log | 0.9901 | 400 | 0.8809 | 0.6223 | 0.8809 | 0.9386 |
| No log | 0.9950 | 402 | 0.9729 | 0.5699 | 0.9729 | 0.9864 |
| No log | 1.0 | 404 | 1.1080 | 0.5352 | 1.1080 | 1.0526 |
| No log | 1.0050 | 406 | 1.2270 | 0.5352 | 1.2270 | 1.1077 |
| No log | 1.0099 | 408 | 1.3309 | 0.4886 | 1.3309 | 1.1537 |
| No log | 1.0149 | 410 | 1.3369 | 0.4772 | 1.3369 | 1.1562 |
| No log | 1.0198 | 412 | 1.3500 | 0.4545 | 1.3500 | 1.1619 |
| No log | 1.0248 | 414 | 1.3357 | 0.4762 | 1.3357 | 1.1557 |
| No log | 1.0297 | 416 | 1.2972 | 0.4906 | 1.2972 | 1.1389 |
| No log | 1.0347 | 418 | 1.3383 | 0.4770 | 1.3383 | 1.1569 |
| No log | 1.0396 | 420 | 1.4187 | 0.4336 | 1.4187 | 1.1911 |
| No log | 1.0446 | 422 | 1.4708 | 0.3647 | 1.4708 | 1.2128 |
| No log | 1.0495 | 424 | 1.5184 | 0.3647 | 1.5184 | 1.2322 |
| No log | 1.0545 | 426 | 1.5409 | 0.3986 | 1.5409 | 1.2413 |
| No log | 1.0594 | 428 | 1.3764 | 0.4755 | 1.3764 | 1.1732 |
| No log | 1.0644 | 430 | 1.1735 | 0.4912 | 1.1735 | 1.0833 |
| No log | 1.0693 | 432 | 1.0187 | 0.5511 | 1.0187 | 1.0093 |
| No log | 1.0743 | 434 | 0.9748 | 0.5534 | 0.9748 | 0.9873 |
| No log | 1.0792 | 436 | 0.9672 | 0.5656 | 0.9672 | 0.9835 |
| No log | 1.0842 | 438 | 0.9265 | 0.5870 | 0.9265 | 0.9626 |
| No log | 1.0891 | 440 | 0.9090 | 0.5857 | 0.9090 | 0.9534 |
| No log | 1.0941 | 442 | 0.9158 | 0.4864 | 0.9158 | 0.9570 |
| No log | 1.0990 | 444 | 0.9285 | 0.4522 | 0.9285 | 0.9636 |
| No log | 1.1040 | 446 | 0.9140 | 0.4773 | 0.9140 | 0.9560 |
| No log | 1.1089 | 448 | 0.8895 | 0.4773 | 0.8895 | 0.9431 |
| No log | 1.1139 | 450 | 0.8663 | 0.544 | 0.8663 | 0.9308 |
| No log | 1.1188 | 452 | 0.8693 | 0.544 | 0.8693 | 0.9323 |
| No log | 1.1238 | 454 | 0.9082 | 0.6241 | 0.9082 | 0.9530 |
| No log | 1.1287 | 456 | 0.9945 | 0.5785 | 0.9945 | 0.9972 |
| No log | 1.1337 | 458 | 1.0770 | 0.5162 | 1.0770 | 1.0378 |
| No log | 1.1386 | 460 | 1.1112 | 0.4912 | 1.1112 | 1.0541 |
| No log | 1.1436 | 462 | 1.0793 | 0.4912 | 1.0793 | 1.0389 |
| No log | 1.1485 | 464 | 1.0289 | 0.4713 | 1.0289 | 1.0144 |
| No log | 1.1535 | 466 | 0.9692 | 0.4481 | 0.9692 | 0.9845 |
| No log | 1.1584 | 468 | 0.9534 | 0.3919 | 0.9534 | 0.9764 |
| No log | 1.1634 | 470 | 0.9525 | 0.4219 | 0.9525 | 0.9759 |
| No log | 1.1683 | 472 | 0.9268 | 0.3931 | 0.9268 | 0.9627 |
| No log | 1.1733 | 474 | 0.9175 | 0.3694 | 0.9175 | 0.9578 |
| No log | 1.1782 | 476 | 0.9073 | 0.4498 | 0.9073 | 0.9525 |
| No log | 1.1832 | 478 | 0.8522 | 0.4771 | 0.8522 | 0.9231 |
| No log | 1.1881 | 480 | 0.8317 | 0.4838 | 0.8317 | 0.9120 |
| No log | 1.1931 | 482 | 0.8338 | 0.5932 | 0.8338 | 0.9131 |
| No log | 1.1980 | 484 | 0.8484 | 0.6119 | 0.8484 | 0.9211 |
| No log | 1.2030 | 486 | 0.8750 | 0.5557 | 0.8750 | 0.9354 |
| No log | 1.2079 | 488 | 0.8524 | 0.5814 | 0.8524 | 0.9233 |
| No log | 1.2129 | 490 | 0.8417 | 0.5909 | 0.8417 | 0.9174 |
| No log | 1.2178 | 492 | 0.8359 | 0.4714 | 0.8359 | 0.9143 |
| No log | 1.2228 | 494 | 0.8178 | 0.4137 | 0.8178 | 0.9043 |
| No log | 1.2277 | 496 | 0.7992 | 0.5099 | 0.7992 | 0.8940 |
| No log | 1.2327 | 498 | 0.8252 | 0.5613 | 0.8252 | 0.9084 |
| 0.4531 | 1.2376 | 500 | 0.8762 | 0.5146 | 0.8762 | 0.9360 |
| 0.4531 | 1.2426 | 502 | 0.8671 | 0.4629 | 0.8671 | 0.9312 |
| 0.4531 | 1.2475 | 504 | 0.9323 | 0.5661 | 0.9323 | 0.9655 |
| 0.4531 | 1.2525 | 506 | 1.2185 | 0.4653 | 1.2185 | 1.1038 |
| 0.4531 | 1.2574 | 508 | 1.5504 | 0.3422 | 1.5504 | 1.2452 |
| 0.4531 | 1.2624 | 510 | 1.8535 | 0.3255 | 1.8535 | 1.3614 |
| 0.4531 | 1.2673 | 512 | 2.0808 | 0.2289 | 2.0808 | 1.4425 |
| 0.4531 | 1.2723 | 514 | 2.0991 | 0.2174 | 2.0991 | 1.4488 |
| 0.4531 | 1.2772 | 516 | 2.0383 | 0.2450 | 2.0383 | 1.4277 |
| 0.4531 | 1.2822 | 518 | 1.9103 | 0.2450 | 1.9103 | 1.3821 |
| 0.4531 | 1.2871 | 520 | 1.7005 | 0.3761 | 1.7005 | 1.3040 |
| 0.4531 | 1.2921 | 522 | 1.5099 | 0.4190 | 1.5099 | 1.2288 |
| 0.4531 | 1.2970 | 524 | 1.3680 | 0.4402 | 1.3680 | 1.1696 |
| 0.4531 | 1.3020 | 526 | 1.2379 | 0.4526 | 1.2379 | 1.1126 |
| 0.4531 | 1.3069 | 528 | 1.0763 | 0.5027 | 1.0763 | 1.0375 |
| 0.4531 | 1.3119 | 530 | 1.0147 | 0.5798 | 1.0147 | 1.0073 |
| 0.4531 | 1.3168 | 532 | 1.0561 | 0.5331 | 1.0561 | 1.0277 |
| 0.4531 | 1.3218 | 534 | 1.0619 | 0.4944 | 1.0619 | 1.0305 |
| 0.4531 | 1.3267 | 536 | 1.1031 | 0.4698 | 1.1031 | 1.0503 |
| 0.4531 | 1.3317 | 538 | 1.0899 | 0.3889 | 1.0899 | 1.0440 |
| 0.4531 | 1.3366 | 540 | 1.0968 | 0.3447 | 1.0968 | 1.0473 |
| 0.4531 | 1.3416 | 542 | 1.0765 | 0.2987 | 1.0765 | 1.0376 |
| 0.4531 | 1.3465 | 544 | 1.0438 | 0.3123 | 1.0438 | 1.0217 |
| 0.4531 | 1.3515 | 546 | 1.0557 | 0.3022 | 1.0557 | 1.0275 |
| 0.4531 | 1.3564 | 548 | 1.1581 | 0.3746 | 1.1581 | 1.0761 |
| 0.4531 | 1.3614 | 550 | 1.2355 | 0.3864 | 1.2355 | 1.1115 |
| 0.4531 | 1.3663 | 552 | 1.3172 | 0.4408 | 1.3172 | 1.1477 |
| 0.4531 | 1.3713 | 554 | 1.3177 | 0.4227 | 1.3177 | 1.1479 |
| 0.4531 | 1.3762 | 556 | 1.3171 | 0.4234 | 1.3171 | 1.1477 |
| 0.4531 | 1.3812 | 558 | 1.2517 | 0.3394 | 1.2517 | 1.1188 |
| 0.4531 | 1.3861 | 560 | 1.2287 | 0.4142 | 1.2287 | 1.1085 |
| 0.4531 | 1.3911 | 562 | 1.2327 | 0.4292 | 1.2327 | 1.1103 |
| 0.4531 | 1.3960 | 564 | 1.2204 | 0.4018 | 1.2204 | 1.1047 |
| 0.4531 | 1.4010 | 566 | 1.1831 | 0.4018 | 1.1831 | 1.0877 |
| 0.4531 | 1.4059 | 568 | 1.1323 | 0.4009 | 1.1323 | 1.0641 |
| 0.4531 | 1.4109 | 570 | 1.0895 | 0.4018 | 1.0895 | 1.0438 |
| 0.4531 | 1.4158 | 572 | 1.0866 | 0.4288 | 1.0866 | 1.0424 |
| 0.4531 | 1.4208 | 574 | 1.1116 | 0.4136 | 1.1116 | 1.0543 |
| 0.4531 | 1.4257 | 576 | 1.1785 | 0.4142 | 1.1785 | 1.0856 |
| 0.4531 | 1.4307 | 578 | 1.1985 | 0.3264 | 1.1985 | 1.0947 |
| 0.4531 | 1.4356 | 580 | 1.1717 | 0.2776 | 1.1717 | 1.0825 |
| 0.4531 | 1.4406 | 582 | 1.1555 | 0.3591 | 1.1555 | 1.0750 |
| 0.4531 | 1.4455 | 584 | 1.1827 | 0.3591 | 1.1827 | 1.0875 |
| 0.4531 | 1.4505 | 586 | 1.1545 | 0.4142 | 1.1545 | 1.0745 |
| 0.4531 | 1.4554 | 588 | 1.1042 | 0.4027 | 1.1042 | 1.0508 |
| 0.4531 | 1.4604 | 590 | 1.0908 | 0.4027 | 1.0908 | 1.0444 |
| 0.4531 | 1.4653 | 592 | 1.0538 | 0.4167 | 1.0538 | 1.0266 |
| 0.4531 | 1.4703 | 594 | 1.0154 | 0.4639 | 1.0154 | 1.0077 |
| 0.4531 | 1.4752 | 596 | 0.9907 | 0.375 | 0.9907 | 0.9953 |
| 0.4531 | 1.4802 | 598 | 0.9940 | 0.3611 | 0.9940 | 0.9970 |
| 0.4531 | 1.4851 | 600 | 0.9836 | 0.4357 | 0.9836 | 0.9918 |
| 0.4531 | 1.4901 | 602 | 0.9253 | 0.4172 | 0.9253 | 0.9619 |
| 0.4531 | 1.4950 | 604 | 0.8319 | 0.4958 | 0.8319 | 0.9121 |
| 0.4531 | 1.5 | 606 | 0.8587 | 0.5720 | 0.8587 | 0.9267 |
| 0.4531 | 1.5050 | 608 | 0.8663 | 0.5437 | 0.8663 | 0.9307 |
| 0.4531 | 1.5099 | 610 | 0.8759 | 0.5289 | 0.8759 | 0.9359 |
| 0.4531 | 1.5149 | 612 | 0.9132 | 0.4181 | 0.9132 | 0.9556 |
| 0.4531 | 1.5198 | 614 | 0.9933 | 0.5067 | 0.9933 | 0.9966 |
| 0.4531 | 1.5248 | 616 | 1.0318 | 0.4720 | 1.0318 | 1.0158 |
| 0.4531 | 1.5297 | 618 | 1.0972 | 0.4908 | 1.0972 | 1.0475 |
| 0.4531 | 1.5347 | 620 | 1.1425 | 0.4935 | 1.1425 | 1.0689 |
| 0.4531 | 1.5396 | 622 | 1.0897 | 0.5251 | 1.0897 | 1.0439 |
| 0.4531 | 1.5446 | 624 | 1.0497 | 0.5104 | 1.0497 | 1.0246 |
| 0.4531 | 1.5495 | 626 | 0.9939 | 0.4826 | 0.9939 | 0.9969 |
| 0.4531 | 1.5545 | 628 | 0.9945 | 0.4864 | 0.9945 | 0.9972 |
| 0.4531 | 1.5594 | 630 | 1.0060 | 0.4776 | 1.0060 | 1.0030 |
| 0.4531 | 1.5644 | 632 | 0.9395 | 0.5307 | 0.9395 | 0.9693 |
| 0.4531 | 1.5693 | 634 | 0.8856 | 0.5174 | 0.8856 | 0.9410 |
| 0.4531 | 1.5743 | 636 | 0.8755 | 0.5282 | 0.8755 | 0.9357 |
| 0.4531 | 1.5792 | 638 | 0.8630 | 0.5737 | 0.8630 | 0.9290 |
| 0.4531 | 1.5842 | 640 | 0.8757 | 0.5282 | 0.8757 | 0.9358 |
| 0.4531 | 1.5891 | 642 | 0.9294 | 0.5067 | 0.9294 | 0.9641 |
| 0.4531 | 1.5941 | 644 | 0.9546 | 0.4720 | 0.9546 | 0.9771 |
| 0.4531 | 1.5990 | 646 | 1.0343 | 0.4748 | 1.0343 | 1.0170 |
| 0.4531 | 1.6040 | 648 | 1.0207 | 0.4748 | 1.0207 | 1.0103 |
| 0.4531 | 1.6089 | 650 | 0.9838 | 0.4854 | 0.9838 | 0.9918 |
| 0.4531 | 1.6139 | 652 | 0.9719 | 0.5 | 0.9719 | 0.9859 |
| 0.4531 | 1.6188 | 654 | 0.9613 | 0.5 | 0.9613 | 0.9805 |
| 0.4531 | 1.6238 | 656 | 1.0247 | 0.5385 | 1.0247 | 1.0123 |
| 0.4531 | 1.6287 | 658 | 1.0280 | 0.5287 | 1.0280 | 1.0139 |
| 0.4531 | 1.6337 | 660 | 0.9443 | 0.5 | 0.9443 | 0.9717 |
| 0.4531 | 1.6386 | 662 | 0.8940 | 0.4441 | 0.8940 | 0.9455 |
| 0.4531 | 1.6436 | 664 | 0.9269 | 0.5398 | 0.9269 | 0.9627 |
| 0.4531 | 1.6485 | 666 | 0.9596 | 0.4926 | 0.9596 | 0.9796 |
| 0.4531 | 1.6535 | 668 | 0.9790 | 0.4156 | 0.9790 | 0.9894 |
| 0.4531 | 1.6584 | 670 | 0.9903 | 0.4452 | 0.9903 | 0.9951 |
| 0.4531 | 1.6634 | 672 | 1.0195 | 0.3650 | 1.0195 | 1.0097 |
| 0.4531 | 1.6683 | 674 | 1.0359 | 0.4652 | 1.0359 | 1.0178 |
| 0.4531 | 1.6733 | 676 | 1.0504 | 0.4713 | 1.0504 | 1.0249 |
| 0.4531 | 1.6782 | 678 | 1.0495 | 0.4375 | 1.0495 | 1.0245 |
| 0.4531 | 1.6832 | 680 | 1.0497 | 0.4440 | 1.0497 | 1.0246 |
| 0.4531 | 1.6881 | 682 | 1.0802 | 0.3609 | 1.0802 | 1.0393 |
| 0.4531 | 1.6931 | 684 | 1.1347 | 0.4343 | 1.1347 | 1.0652 |
| 0.4531 | 1.6980 | 686 | 1.0292 | 0.4358 | 1.0292 | 1.0145 |
| 0.4531 | 1.7030 | 688 | 0.9660 | 0.4493 | 0.9660 | 0.9829 |
| 0.4531 | 1.7079 | 690 | 0.9407 | 0.4231 | 0.9407 | 0.9699 |
| 0.4531 | 1.7129 | 692 | 0.9090 | 0.4231 | 0.9090 | 0.9534 |
| 0.4531 | 1.7178 | 694 | 0.8895 | 0.5833 | 0.8895 | 0.9431 |
| 0.4531 | 1.7228 | 696 | 0.8752 | 0.5668 | 0.8752 | 0.9355 |
| 0.4531 | 1.7277 | 698 | 0.8695 | 0.5833 | 0.8695 | 0.9325 |
| 0.4531 | 1.7327 | 700 | 0.8986 | 0.52 | 0.8986 | 0.9480 |
| 0.4531 | 1.7376 | 702 | 0.9505 | 0.5627 | 0.9505 | 0.9749 |
| 0.4531 | 1.7426 | 704 | 1.0178 | 0.5333 | 1.0178 | 1.0089 |
| 0.4531 | 1.7475 | 706 | 1.0807 | 0.5070 | 1.0807 | 1.0396 |
| 0.4531 | 1.7525 | 708 | 1.1392 | 0.4851 | 1.1392 | 1.0674 |
| 0.4531 | 1.7574 | 710 | 1.1468 | 0.4966 | 1.1468 | 1.0709 |
| 0.4531 | 1.7624 | 712 | 1.1628 | 0.5164 | 1.1628 | 1.0783 |
| 0.4531 | 1.7673 | 714 | 1.1921 | 0.5164 | 1.1921 | 1.0918 |
| 0.4531 | 1.7723 | 716 | 1.1857 | 0.4955 | 1.1857 | 1.0889 |
| 0.4531 | 1.7772 | 718 | 1.2388 | 0.5141 | 1.2388 | 1.1130 |
| 0.4531 | 1.7822 | 720 | 1.3674 | 0.4780 | 1.3674 | 1.1694 |
| 0.4531 | 1.7871 | 722 | 1.3529 | 0.4668 | 1.3529 | 1.1631 |
| 0.4531 | 1.7921 | 724 | 1.3059 | 0.4457 | 1.3059 | 1.1428 |
| 0.4531 | 1.7970 | 726 | 1.2201 | 0.4706 | 1.2201 | 1.1046 |
| 0.4531 | 1.8020 | 728 | 1.1836 | 0.4713 | 1.1836 | 1.0879 |
| 0.4531 | 1.8069 | 730 | 1.1433 | 0.4713 | 1.1433 | 1.0692 |
| 0.4531 | 1.8119 | 732 | 1.0959 | 0.4360 | 1.0959 | 1.0468 |
| 0.4531 | 1.8168 | 734 | 1.0444 | 0.5012 | 1.0444 | 1.0220 |
| 0.4531 | 1.8218 | 736 | 1.0141 | 0.5396 | 1.0141 | 1.0070 |
| 0.4531 | 1.8267 | 738 | 0.9965 | 0.4234 | 0.9965 | 0.9982 |
| 0.4531 | 1.8317 | 740 | 0.9968 | 0.496 | 0.9968 | 0.9984 |
| 0.4531 | 1.8366 | 742 | 1.0185 | 0.5630 | 1.0185 | 1.0092 |
| 0.4531 | 1.8416 | 744 | 0.9768 | 0.5420 | 0.9768 | 0.9883 |
| 0.4531 | 1.8465 | 746 | 0.9609 | 0.5420 | 0.9609 | 0.9803 |
| 0.4531 | 1.8515 | 748 | 0.9557 | 0.5135 | 0.9557 | 0.9776 |
| 0.4531 | 1.8564 | 750 | 0.9628 | 0.5310 | 0.9628 | 0.9812 |
| 0.4531 | 1.8614 | 752 | 0.9564 | 0.4813 | 0.9564 | 0.9779 |
| 0.4531 | 1.8663 | 754 | 0.9496 | 0.4842 | 0.9496 | 0.9744 |
| 0.4531 | 1.8713 | 756 | 0.9474 | 0.5111 | 0.9474 | 0.9734 |
| 0.4531 | 1.8762 | 758 | 0.9335 | 0.4779 | 0.9335 | 0.9662 |
| 0.4531 | 1.8812 | 760 | 0.8951 | 0.5310 | 0.8951 | 0.9461 |
| 0.4531 | 1.8861 | 762 | 0.8879 | 0.6163 | 0.8879 | 0.9423 |
| 0.4531 | 1.8911 | 764 | 0.8865 | 0.616 | 0.8865 | 0.9416 |
| 0.4531 | 1.8960 | 766 | 0.8732 | 0.6419 | 0.8732 | 0.9344 |
| 0.4531 | 1.9010 | 768 | 0.8850 | 0.5789 | 0.8850 | 0.9407 |
| 0.4531 | 1.9059 | 770 | 0.9518 | 0.5533 | 0.9518 | 0.9756 |
| 0.4531 | 1.9109 | 772 | 1.0149 | 0.5297 | 1.0149 | 1.0074 |
| 0.4531 | 1.9158 | 774 | 1.1081 | 0.5489 | 1.1081 | 1.0527 |
| 0.4531 | 1.9208 | 776 | 1.2002 | 0.5132 | 1.2002 | 1.0956 |
| 0.4531 | 1.9257 | 778 | 1.2174 | 0.5132 | 1.2174 | 1.1033 |
| 0.4531 | 1.9307 | 780 | 1.1787 | 0.5252 | 1.1787 | 1.0857 |
| 0.4531 | 1.9356 | 782 | 1.1182 | 0.5468 | 1.1182 | 1.0574 |
| 0.4531 | 1.9406 | 784 | 1.1104 | 0.5468 | 1.1104 | 1.0537 |
| 0.4531 | 1.9455 | 786 | 1.1258 | 0.5469 | 1.1258 | 1.0611 |
| 0.4531 | 1.9505 | 788 | 1.1179 | 0.5699 | 1.1179 | 1.0573 |
| 0.4531 | 1.9554 | 790 | 1.0820 | 0.6023 | 1.0820 | 1.0402 |
| 0.4531 | 1.9604 | 792 | 0.9976 | 0.625 | 0.9976 | 0.9988 |
| 0.4531 | 1.9653 | 794 | 0.9486 | 0.6106 | 0.9486 | 0.9740 |
| 0.4531 | 1.9703 | 796 | 0.9242 | 0.6092 | 0.9242 | 0.9613 |
| 0.4531 | 1.9752 | 798 | 0.9161 | 0.6284 | 0.9161 | 0.9571 |
| 0.4531 | 1.9802 | 800 | 0.9069 | 0.6341 | 0.9069 | 0.9523 |
| 0.4531 | 1.9851 | 802 | 0.8950 | 0.6698 | 0.8950 | 0.9461 |
| 0.4531 | 1.9901 | 804 | 0.9013 | 0.6369 | 0.9013 | 0.9494 |
| 0.4531 | 1.9950 | 806 | 0.9040 | 0.6751 | 0.9040 | 0.9508 |
| 0.4531 | 2.0 | 808 | 0.9447 | 0.6379 | 0.9447 | 0.9720 |
| 0.4531 | 2.0050 | 810 | 0.9742 | 0.5714 | 0.9742 | 0.9870 |
| 0.4531 | 2.0099 | 812 | 0.9823 | 0.5531 | 0.9823 | 0.9911 |
| 0.4531 | 2.0149 | 814 | 0.9898 | 0.5319 | 0.9898 | 0.9949 |
| 0.4531 | 2.0198 | 816 | 0.9824 | 0.5319 | 0.9824 | 0.9911 |
| 0.4531 | 2.0248 | 818 | 1.0060 | 0.5319 | 1.0060 | 1.0030 |
| 0.4531 | 2.0297 | 820 | 1.0265 | 0.5319 | 1.0265 | 1.0132 |
| 0.4531 | 2.0347 | 822 | 1.0410 | 0.5510 | 1.0410 | 1.0203 |
| 0.4531 | 2.0396 | 824 | 1.0617 | 0.5695 | 1.0617 | 1.0304 |
| 0.4531 | 2.0446 | 826 | 1.1002 | 0.5582 | 1.1002 | 1.0489 |
| 0.4531 | 2.0495 | 828 | 1.1487 | 0.5561 | 1.1487 | 1.0718 |
| 0.4531 | 2.0545 | 830 | 1.2552 | 0.5226 | 1.2552 | 1.1203 |
| 0.4531 | 2.0594 | 832 | 1.3317 | 0.5219 | 1.3317 | 1.1540 |
| 0.4531 | 2.0644 | 834 | 1.3372 | 0.4336 | 1.3372 | 1.1564 |
| 0.4531 | 2.0693 | 836 | 1.3191 | 0.4336 | 1.3191 | 1.1485 |
| 0.4531 | 2.0743 | 838 | 1.3114 | 0.5226 | 1.3114 | 1.1452 |
| 0.4531 | 2.0792 | 840 | 1.2803 | 0.5338 | 1.2803 | 1.1315 |
| 0.4531 | 2.0842 | 842 | 1.2265 | 0.5540 | 1.2265 | 1.1075 |
| 0.4531 | 2.0891 | 844 | 1.1558 | 0.5561 | 1.1558 | 1.0751 |
| 0.4531 | 2.0941 | 846 | 1.0894 | 0.5561 | 1.0894 | 1.0437 |
| 0.4531 | 2.0990 | 848 | 1.0199 | 0.4569 | 1.0199 | 1.0099 |
| 0.4531 | 2.1040 | 850 | 1.0274 | 0.4728 | 1.0274 | 1.0136 |
| 0.4531 | 2.1089 | 852 | 1.0397 | 0.4268 | 1.0397 | 1.0197 |
| 0.4531 | 2.1139 | 854 | 1.0289 | 0.4268 | 1.0289 | 1.0143 |
| 0.4531 | 2.1188 | 856 | 1.0307 | 0.4268 | 1.0307 | 1.0152 |
| 0.4531 | 2.1238 | 858 | 1.0098 | 0.4130 | 1.0098 | 1.0049 |
| 0.4531 | 2.1287 | 860 | 0.9842 | 0.5067 | 0.9842 | 0.9921 |
| 0.4531 | 2.1337 | 862 | 0.9999 | 0.4651 | 0.9999 | 1.0000 |
| 0.4531 | 2.1386 | 864 | 1.0705 | 0.5241 | 1.0705 | 1.0347 |
| 0.4531 | 2.1436 | 866 | 1.1950 | 0.5219 | 1.1950 | 1.0932 |
| 0.4531 | 2.1485 | 868 | 1.3057 | 0.4872 | 1.3057 | 1.1427 |
| 0.4531 | 2.1535 | 870 | 1.4101 | 0.4658 | 1.4101 | 1.1875 |
| 0.4531 | 2.1584 | 872 | 1.4353 | 0.4658 | 1.4353 | 1.1980 |
| 0.4531 | 2.1634 | 874 | 1.4171 | 0.4336 | 1.4171 | 1.1904 |
| 0.4531 | 2.1683 | 876 | 1.3457 | 0.4545 | 1.3457 | 1.1600 |
| 0.4531 | 2.1733 | 878 | 1.2769 | 0.4882 | 1.2769 | 1.1300 |
| 0.4531 | 2.1782 | 880 | 1.2461 | 0.4906 | 1.2461 | 1.1163 |
| 0.4531 | 2.1832 | 882 | 1.1843 | 0.4919 | 1.1843 | 1.0883 |
| 0.4531 | 2.1881 | 884 | 1.1177 | 0.5241 | 1.1177 | 1.0572 |
| 0.4531 | 2.1931 | 886 | 1.0306 | 0.5379 | 1.0306 | 1.0152 |
| 0.4531 | 2.1980 | 888 | 0.9753 | 0.5826 | 0.9753 | 0.9876 |
| 0.4531 | 2.2030 | 890 | 0.9474 | 0.6267 | 0.9474 | 0.9734 |
| 0.4531 | 2.2079 | 892 | 0.9252 | 0.6483 | 0.9252 | 0.9619 |
| 0.4531 | 2.2129 | 894 | 0.9390 | 0.6522 | 0.9390 | 0.9690 |
| 0.4531 | 2.2178 | 896 | 0.9632 | 0.6408 | 0.9632 | 0.9814 |
| 0.4531 | 2.2228 | 898 | 0.9908 | 0.6429 | 0.9908 | 0.9954 |
| 0.4531 | 2.2277 | 900 | 0.9859 | 0.625 | 0.9859 | 0.9929 |
| 0.4531 | 2.2327 | 902 | 0.9583 | 0.5909 | 0.9583 | 0.9789 |
| 0.4531 | 2.2376 | 904 | 0.9471 | 0.5759 | 0.9471 | 0.9732 |
| 0.4531 | 2.2426 | 906 | 0.9476 | 0.5599 | 0.9476 | 0.9735 |
| 0.4531 | 2.2475 | 908 | 0.9571 | 0.5553 | 0.9571 | 0.9783 |
| 0.4531 | 2.2525 | 910 | 0.9179 | 0.625 | 0.9179 | 0.9581 |
| 0.4531 | 2.2574 | 912 | 0.8981 | 0.6562 | 0.8981 | 0.9477 |
| 0.4531 | 2.2624 | 914 | 0.8933 | 0.6152 | 0.8933 | 0.9451 |
| 0.4531 | 2.2673 | 916 | 0.8941 | 0.6152 | 0.8941 | 0.9456 |
| 0.4531 | 2.2723 | 918 | 0.8930 | 0.6186 | 0.8930 | 0.9450 |
| 0.4531 | 2.2772 | 920 | 0.8637 | 0.6538 | 0.8637 | 0.9294 |
| 0.4531 | 2.2822 | 922 | 0.8654 | 0.6331 | 0.8654 | 0.9303 |
| 0.4531 | 2.2871 | 924 | 0.9032 | 0.6434 | 0.9032 | 0.9504 |
| 0.4531 | 2.2921 | 926 | 0.9492 | 0.6325 | 0.9492 | 0.9743 |
| 0.4531 | 2.2970 | 928 | 0.9865 | 0.5826 | 0.9865 | 0.9932 |
| 0.4531 | 2.3020 | 930 | 1.0354 | 0.5362 | 1.0354 | 1.0175 |
| 0.4531 | 2.3069 | 932 | 1.0001 | 0.5714 | 1.0001 | 1.0000 |
| 0.4531 | 2.3119 | 934 | 0.9798 | 0.5947 | 0.9798 | 0.9898 |
| 0.4531 | 2.3168 | 936 | 0.9457 | 0.6092 | 0.9457 | 0.9725 |
| 0.4531 | 2.3218 | 938 | 0.9196 | 0.6026 | 0.9196 | 0.9589 |
| 0.4531 | 2.3267 | 940 | 0.8810 | 0.6473 | 0.8810 | 0.9386 |
| 0.4531 | 2.3317 | 942 | 0.8459 | 0.6556 | 0.8459 | 0.9197 |
| 0.4531 | 2.3366 | 944 | 0.8302 | 0.6556 | 0.8302 | 0.9111 |
| 0.4531 | 2.3416 | 946 | 0.8226 | 0.6373 | 0.8226 | 0.9070 |
| 0.4531 | 2.3465 | 948 | 0.8383 | 0.6453 | 0.8383 | 0.9156 |
| 0.4531 | 2.3515 | 950 | 0.8533 | 0.6351 | 0.8533 | 0.9237 |
| 0.4531 | 2.3564 | 952 | 0.8659 | 0.5929 | 0.8659 | 0.9305 |
| 0.4531 | 2.3614 | 954 | 0.8830 | 0.5929 | 0.8830 | 0.9397 |
| 0.4531 | 2.3663 | 956 | 0.8942 | 0.5929 | 0.8942 | 0.9456 |
| 0.4531 | 2.3713 | 958 | 0.8951 | 0.5919 | 0.8951 | 0.9461 |
| 0.4531 | 2.3762 | 960 | 0.8956 | 0.6036 | 0.8956 | 0.9464 |
| 0.4531 | 2.3812 | 962 | 0.9029 | 0.6331 | 0.9029 | 0.9502 |
| 0.4531 | 2.3861 | 964 | 0.9024 | 0.6331 | 0.9024 | 0.9499 |
| 0.4531 | 2.3911 | 966 | 0.8965 | 0.6294 | 0.8965 | 0.9468 |
| 0.4531 | 2.3960 | 968 | 0.8960 | 0.6294 | 0.8960 | 0.9466 |
| 0.4531 | 2.4010 | 970 | 0.9105 | 0.6294 | 0.9105 | 0.9542 |
| 0.4531 | 2.4059 | 972 | 0.9113 | 0.6331 | 0.9113 | 0.9546 |
| 0.4531 | 2.4109 | 974 | 0.9144 | 0.6311 | 0.9144 | 0.9563 |
| 0.4531 | 2.4158 | 976 | 0.9327 | 0.5919 | 0.9327 | 0.9658 |
| 0.4531 | 2.4208 | 978 | 0.9427 | 0.5757 | 0.9427 | 0.9709 |
| 0.4531 | 2.4257 | 980 | 0.9490 | 0.5730 | 0.9490 | 0.9742 |
| 0.4531 | 2.4307 | 982 | 0.9442 | 0.5919 | 0.9442 | 0.9717 |
| 0.4531 | 2.4356 | 984 | 0.9373 | 0.6331 | 0.9373 | 0.9682 |
| 0.4531 | 2.4406 | 986 | 0.9112 | 0.6331 | 0.9112 | 0.9546 |
| 0.4531 | 2.4455 | 988 | 0.8881 | 0.6331 | 0.8881 | 0.9424 |
| 0.4531 | 2.4505 | 990 | 0.8827 | 0.6130 | 0.8827 | 0.9395 |
| 0.4531 | 2.4554 | 992 | 0.8849 | 0.5714 | 0.8849 | 0.9407 |
| 0.4531 | 2.4604 | 994 | 0.9122 | 0.5757 | 0.9122 | 0.9551 |
| 0.4531 | 2.4653 | 996 | 0.9290 | 0.5757 | 0.9290 | 0.9639 |
| 0.4531 | 2.4703 | 998 | 0.8935 | 0.5535 | 0.8935 | 0.9453 |
| 0.101 | 2.4752 | 1000 | 0.8577 | 0.5671 | 0.8577 | 0.9261 |
| 0.101 | 2.4802 | 1002 | 0.9091 | 0.5739 | 0.9091 | 0.9535 |
| 0.101 | 2.4851 | 1004 | 0.9939 | 0.5179 | 0.9939 | 0.9969 |
| 0.101 | 2.4901 | 1006 | 1.0597 | 0.4940 | 1.0597 | 1.0294 |
| 0.101 | 2.4950 | 1008 | 1.1228 | 0.4930 | 1.1228 | 1.0596 |
| 0.101 | 2.5 | 1010 | 1.1437 | 0.4930 | 1.1437 | 1.0694 |
| 0.101 | 2.5050 | 1012 | 1.1651 | 0.4919 | 1.1651 | 1.0794 |
| 0.101 | 2.5099 | 1014 | 1.1902 | 0.5113 | 1.1902 | 1.0910 |
| 0.101 | 2.5149 | 1016 | 1.1543 | 0.5113 | 1.1543 | 1.0744 |
| 0.101 | 2.5198 | 1018 | 1.0650 | 0.5157 | 1.0650 | 1.0320 |
| 0.101 | 2.5248 | 1020 | 0.9622 | 0.5157 | 0.9622 | 0.9809 |
| 0.101 | 2.5297 | 1022 | 0.9064 | 0.5947 | 0.9064 | 0.9520 |
| 0.101 | 2.5347 | 1024 | 0.8693 | 0.5909 | 0.8693 | 0.9323 |
| 0.101 | 2.5396 | 1026 | 0.8490 | 0.5970 | 0.8490 | 0.9214 |
| 0.101 | 2.5446 | 1028 | 0.8423 | 0.5774 | 0.8423 | 0.9177 |
| 0.101 | 2.5495 | 1030 | 0.8640 | 0.5848 | 0.8640 | 0.9295 |
| 0.101 | 2.5545 | 1032 | 0.9033 | 0.5970 | 0.9033 | 0.9504 |
| 0.101 | 2.5594 | 1034 | 0.9100 | 0.6231 | 0.9100 | 0.9540 |
| 0.101 | 2.5644 | 1036 | 0.9096 | 0.6231 | 0.9096 | 0.9537 |
| 0.101 | 2.5693 | 1038 | 0.8905 | 0.6556 | 0.8905 | 0.9436 |
| 0.101 | 2.5743 | 1040 | 0.8722 | 0.6177 | 0.8722 | 0.9339 |
| 0.101 | 2.5792 | 1042 | 0.8691 | 0.6177 | 0.8691 | 0.9322 |
| 0.101 | 2.5842 | 1044 | 0.8744 | 0.6177 | 0.8744 | 0.9351 |
| 0.101 | 2.5891 | 1046 | 0.8819 | 0.6213 | 0.8819 | 0.9391 |
| 0.101 | 2.5941 | 1048 | 0.9001 | 0.6003 | 0.9001 | 0.9488 |
| 0.101 | 2.5990 | 1050 | 0.9119 | 0.6003 | 0.9119 | 0.9549 |
| 0.101 | 2.6040 | 1052 | 0.9214 | 0.5970 | 0.9214 | 0.9599 |
| 0.101 | 2.6089 | 1054 | 0.9139 | 0.6295 | 0.9139 | 0.9560 |
| 0.101 | 2.6139 | 1056 | 0.8926 | 0.6556 | 0.8926 | 0.9448 |
| 0.101 | 2.6188 | 1058 | 0.8813 | 0.6556 | 0.8813 | 0.9388 |
| 0.101 | 2.6238 | 1060 | 0.8784 | 0.6556 | 0.8784 | 0.9372 |
| 0.101 | 2.6287 | 1062 | 0.8789 | 0.6354 | 0.8789 | 0.9375 |
| 0.101 | 2.6337 | 1064 | 0.8951 | 0.6418 | 0.8951 | 0.9461 |
| 0.101 | 2.6386 | 1066 | 0.9391 | 0.6555 | 0.9391 | 0.9691 |
| 0.101 | 2.6436 | 1068 | 0.9723 | 0.5714 | 0.9723 | 0.9860 |
| 0.101 | 2.6485 | 1070 | 0.9952 | 0.4988 | 0.9952 | 0.9976 |
| 0.101 | 2.6535 | 1072 | 0.9809 | 0.5765 | 0.9809 | 0.9904 |
| 0.101 | 2.6584 | 1074 | 0.9323 | 0.6303 | 0.9323 | 0.9655 |
| 0.101 | 2.6634 | 1076 | 0.8825 | 0.6555 | 0.8825 | 0.9394 |
| 0.101 | 2.6683 | 1078 | 0.8598 | 0.6378 | 0.8598 | 0.9272 |
| 0.101 | 2.6733 | 1080 | 0.8550 | 0.6496 | 0.8550 | 0.9247 |
| 0.101 | 2.6782 | 1082 | 0.8450 | 0.6315 | 0.8450 | 0.9192 |
| 0.101 | 2.6832 | 1084 | 0.8442 | 0.6315 | 0.8442 | 0.9188 |
| 0.101 | 2.6881 | 1086 | 0.8558 | 0.6315 | 0.8558 | 0.9251 |
| 0.101 | 2.6931 | 1088 | 0.8650 | 0.6194 | 0.8650 | 0.9301 |
| 0.101 | 2.6980 | 1090 | 0.8944 | 0.6555 | 0.8944 | 0.9457 |
| 0.101 | 2.7030 | 1092 | 0.9539 | 0.5765 | 0.9539 | 0.9767 |
| 0.101 | 2.7079 | 1094 | 0.9876 | 0.5319 | 0.9876 | 0.9938 |
| 0.101 | 2.7129 | 1096 | 0.9885 | 0.5490 | 0.9885 | 0.9942 |
| 0.101 | 2.7178 | 1098 | 0.9466 | 0.5625 | 0.9466 | 0.9730 |
| 0.101 | 2.7228 | 1100 | 1.0204 | 0.5280 | 1.0204 | 1.0102 |
| 0.101 | 2.7277 | 1102 | 1.0364 | 0.5512 | 1.0364 | 1.0180 |
| 0.101 | 2.7327 | 1104 | 0.9761 | 0.5557 | 0.9761 | 0.9880 |
| 0.101 | 2.7376 | 1106 | 0.9599 | 0.5349 | 0.9599 | 0.9797 |
| 0.101 | 2.7426 | 1108 | 0.9704 | 0.5349 | 0.9704 | 0.9851 |
| 0.101 | 2.7475 | 1110 | 1.0163 | 0.5345 | 1.0163 | 1.0081 |
| 0.101 | 2.7525 | 1112 | 1.0853 | 0.4926 | 1.0853 | 1.0418 |
| 0.101 | 2.7574 | 1114 | 1.2063 | 0.4659 | 1.2063 | 1.0983 |
| 0.101 | 2.7624 | 1116 | 1.3288 | 0.4309 | 1.3288 | 1.1527 |
| 0.101 | 2.7673 | 1118 | 1.4265 | 0.4309 | 1.4265 | 1.1944 |
| 0.101 | 2.7723 | 1120 | 1.4631 | 0.4309 | 1.4631 | 1.2096 |
| 0.101 | 2.7772 | 1122 | 1.4259 | 0.4309 | 1.4259 | 1.1941 |
| 0.101 | 2.7822 | 1124 | 1.2998 | 0.4309 | 1.2998 | 1.1401 |
| 0.101 | 2.7871 | 1126 | 1.2017 | 0.4901 | 1.2017 | 1.0962 |
| 0.101 | 2.7921 | 1128 | 1.1066 | 0.5073 | 1.1066 | 1.0520 |
| 0.101 | 2.7970 | 1130 | 1.0354 | 0.5101 | 1.0354 | 1.0175 |
| 0.101 | 2.8020 | 1132 | 1.0077 | 0.5353 | 1.0077 | 1.0039 |
| 0.101 | 2.8069 | 1134 | 0.9908 | 0.5602 | 0.9908 | 0.9954 |
| 0.101 | 2.8119 | 1136 | 1.0138 | 0.5532 | 1.0138 | 1.0069 |
| 0.101 | 2.8168 | 1138 | 0.9900 | 0.5771 | 0.9900 | 0.9950 |
| 0.101 | 2.8218 | 1140 | 0.9489 | 0.5791 | 0.9489 | 0.9741 |
| 0.101 | 2.8267 | 1142 | 0.9352 | 0.5848 | 0.9352 | 0.9671 |
| 0.101 | 2.8317 | 1144 | 0.9032 | 0.5848 | 0.9032 | 0.9504 |
| 0.101 | 2.8366 | 1146 | 0.9016 | 0.6194 | 0.9016 | 0.9495 |
| 0.101 | 2.8416 | 1148 | 0.8883 | 0.6231 | 0.8883 | 0.9425 |
| 0.101 | 2.8465 | 1150 | 0.8632 | 0.6159 | 0.8632 | 0.9291 |
| 0.101 | 2.8515 | 1152 | 0.8546 | 0.6194 | 0.8546 | 0.9244 |
| 0.101 | 2.8564 | 1154 | 0.8716 | 0.6194 | 0.8716 | 0.9336 |
| 0.101 | 2.8614 | 1156 | 0.9087 | 0.6340 | 0.9087 | 0.9533 |
| 0.101 | 2.8663 | 1158 | 0.9608 | 0.6303 | 0.9608 | 0.9802 |
| 0.101 | 2.8713 | 1160 | 1.0439 | 0.5302 | 1.0439 | 1.0217 |
| 0.101 | 2.8762 | 1162 | 0.9373 | 0.6303 | 0.9373 | 0.9682 |
| 0.101 | 2.8812 | 1164 | 0.8919 | 0.6159 | 0.8919 | 0.9444 |
| 0.101 | 2.8861 | 1166 | 0.8528 | 0.6127 | 0.8528 | 0.9235 |
| 0.101 | 2.8911 | 1168 | 0.8240 | 0.625 | 0.8240 | 0.9077 |
| 0.101 | 2.8960 | 1170 | 0.8112 | 0.6710 | 0.8112 | 0.9007 |
| 0.101 | 2.9010 | 1172 | 0.8242 | 0.6327 | 0.8242 | 0.9079 |
| 0.101 | 2.9059 | 1174 | 0.8424 | 0.5984 | 0.8424 | 0.9178 |
| 0.101 | 2.9109 | 1176 | 0.8528 | 0.5951 | 0.8528 | 0.9235 |
| 0.101 | 2.9158 | 1178 | 0.8540 | 0.6043 | 0.8540 | 0.9241 |
| 0.101 | 2.9208 | 1180 | 0.8448 | 0.6077 | 0.8448 | 0.9191 |
| 0.101 | 2.9257 | 1182 | 0.8478 | 0.6057 | 0.8478 | 0.9208 |
| 0.101 | 2.9307 | 1184 | 0.8742 | 0.6163 | 0.8742 | 0.9350 |
| 0.101 | 2.9356 | 1186 | 0.8832 | 0.6036 | 0.8832 | 0.9398 |
| 0.101 | 2.9406 | 1188 | 0.9030 | 0.6036 | 0.9030 | 0.9503 |
| 0.101 | 2.9455 | 1190 | 0.9277 | 0.5909 | 0.9277 | 0.9632 |
| 0.101 | 2.9505 | 1192 | 0.9559 | 0.5878 | 0.9559 | 0.9777 |
| 0.101 | 2.9554 | 1194 | 0.9528 | 0.5970 | 0.9528 | 0.9761 |
| 0.101 | 2.9604 | 1196 | 0.9292 | 0.6194 | 0.9292 | 0.9640 |
| 0.101 | 2.9653 | 1198 | 0.9073 | 0.6315 | 0.9073 | 0.9525 |
| 0.101 | 2.9703 | 1200 | 0.8912 | 0.6057 | 0.8912 | 0.9440 |
| 0.101 | 2.9752 | 1202 | 0.9094 | 0.6315 | 0.9094 | 0.9536 |
| 0.101 | 2.9802 | 1204 | 0.9419 | 0.5878 | 0.9419 | 0.9705 |
| 0.101 | 2.9851 | 1206 | 0.9696 | 0.5878 | 0.9696 | 0.9847 |
| 0.101 | 2.9901 | 1208 | 0.9952 | 0.5523 | 0.9952 | 0.9976 |
| 0.101 | 2.9950 | 1210 | 0.9840 | 0.55 | 0.9840 | 0.9920 |
| 0.101 | 3.0 | 1212 | 0.9607 | 0.6072 | 0.9607 | 0.9802 |
| 0.101 | 3.0050 | 1214 | 0.9378 | 0.6072 | 0.9378 | 0.9684 |
| 0.101 | 3.0099 | 1216 | 0.9204 | 0.6072 | 0.9204 | 0.9594 |
| 0.101 | 3.0149 | 1218 | 0.9164 | 0.6072 | 0.9164 | 0.9573 |
| 0.101 | 3.0198 | 1220 | 0.9171 | 0.6072 | 0.9171 | 0.9576 |
| 0.101 | 3.0248 | 1222 | 0.8745 | 0.6194 | 0.8745 | 0.9351 |
| 0.101 | 3.0297 | 1224 | 0.8620 | 0.5990 | 0.8620 | 0.9284 |
| 0.101 | 3.0347 | 1226 | 0.8599 | 0.5990 | 0.8599 | 0.9273 |
| 0.101 | 3.0396 | 1228 | 0.8871 | 0.6159 | 0.8871 | 0.9419 |
| 0.101 | 3.0446 | 1230 | 0.8941 | 0.6159 | 0.8941 | 0.9456 |
| 0.101 | 3.0495 | 1232 | 0.9134 | 0.6039 | 0.9134 | 0.9557 |
| 0.101 | 3.0545 | 1234 | 0.9431 | 0.5269 | 0.9431 | 0.9711 |
| 0.101 | 3.0594 | 1236 | 1.0348 | 0.5329 | 1.0348 | 1.0172 |
| 0.101 | 3.0644 | 1238 | 1.1082 | 0.5060 | 1.1082 | 1.0527 |
| 0.101 | 3.0693 | 1240 | 1.1864 | 0.4700 | 1.1864 | 1.0892 |
| 0.101 | 3.0743 | 1242 | 1.2512 | 0.4896 | 1.2512 | 1.1185 |
| 0.101 | 3.0792 | 1244 | 1.2510 | 0.4896 | 1.2510 | 1.1185 |
| 0.101 | 3.0842 | 1246 | 1.2398 | 0.4896 | 1.2398 | 1.1135 |
| 0.101 | 3.0891 | 1248 | 1.2351 | 0.4668 | 1.2351 | 1.1114 |
| 0.101 | 3.0941 | 1250 | 1.1999 | 0.4668 | 1.1999 | 1.0954 |
| 0.101 | 3.0990 | 1252 | 1.1389 | 0.4901 | 1.1389 | 1.0672 |
| 0.101 | 3.1040 | 1254 | 1.0374 | 0.5489 | 1.0374 | 1.0185 |
| 0.101 | 3.1089 | 1256 | 0.9634 | 0.5850 | 0.9634 | 0.9815 |
| 0.101 | 3.1139 | 1258 | 0.9134 | 0.5677 | 0.9134 | 0.9557 |
| 0.101 | 3.1188 | 1260 | 0.8204 | 0.6058 | 0.8204 | 0.9058 |
| 0.101 | 3.1238 | 1262 | 0.7740 | 0.6183 | 0.7740 | 0.8797 |
| 0.101 | 3.1287 | 1264 | 0.7511 | 0.628 | 0.7511 | 0.8667 |
| 0.101 | 3.1337 | 1266 | 0.7294 | 0.6322 | 0.7294 | 0.8540 |
| 0.101 | 3.1386 | 1268 | 0.7290 | 0.6341 | 0.7290 | 0.8538 |
| 0.101 | 3.1436 | 1270 | 0.7362 | 0.6575 | 0.7362 | 0.8580 |
| 0.101 | 3.1485 | 1272 | 0.7431 | 0.6547 | 0.7431 | 0.8620 |
| 0.101 | 3.1535 | 1274 | 0.7510 | 0.6547 | 0.7510 | 0.8666 |
| 0.101 | 3.1584 | 1276 | 0.7649 | 0.6547 | 0.7649 | 0.8746 |
| 0.101 | 3.1634 | 1278 | 0.7632 | 0.6729 | 0.7632 | 0.8736 |
| 0.101 | 3.1683 | 1280 | 0.7382 | 0.6904 | 0.7382 | 0.8592 |
| 0.101 | 3.1733 | 1282 | 0.7236 | 0.6696 | 0.7236 | 0.8507 |
| 0.101 | 3.1782 | 1284 | 0.7359 | 0.6858 | 0.7359 | 0.8578 |
| 0.101 | 3.1832 | 1286 | 0.7603 | 0.6718 | 0.7603 | 0.8719 |
| 0.101 | 3.1881 | 1288 | 0.8130 | 0.6979 | 0.8130 | 0.9017 |
| 0.101 | 3.1931 | 1290 | 0.9613 | 0.5510 | 0.9613 | 0.9805 |
| 0.101 | 3.1980 | 1292 | 1.1366 | 0.4819 | 1.1366 | 1.0661 |
| 0.101 | 3.2030 | 1294 | 1.2894 | 0.4693 | 1.2894 | 1.1355 |
| 0.101 | 3.2079 | 1296 | 1.3756 | 0.4455 | 1.3756 | 1.1729 |
| 0.101 | 3.2129 | 1298 | 1.4756 | 0.4187 | 1.4756 | 1.2148 |
| 0.101 | 3.2178 | 1300 | 1.5520 | 0.3934 | 1.5520 | 1.2458 |
| 0.101 | 3.2228 | 1302 | 1.5840 | 0.3145 | 1.5840 | 1.2586 |
| 0.101 | 3.2277 | 1304 | 1.5798 | 0.3145 | 1.5798 | 1.2569 |
| 0.101 | 3.2327 | 1306 | 1.5353 | 0.3145 | 1.5353 | 1.2391 |
| 0.101 | 3.2376 | 1308 | 1.4915 | 0.3145 | 1.4915 | 1.2213 |
| 0.101 | 3.2426 | 1310 | 1.4569 | 0.3828 | 1.4569 | 1.2070 |
| 0.101 | 3.2475 | 1312 | 1.3614 | 0.4882 | 1.3614 | 1.1668 |
| 0.101 | 3.2525 | 1314 | 1.2258 | 0.4896 | 1.2258 | 1.1071 |
| 0.101 | 3.2574 | 1316 | 1.0925 | 0.5034 | 1.0925 | 1.0452 |
| 0.101 | 3.2624 | 1318 | 0.9024 | 0.5995 | 0.9024 | 0.9499 |
| 0.101 | 3.2673 | 1320 | 0.8225 | 0.5984 | 0.8225 | 0.9069 |
| 0.101 | 3.2723 | 1322 | 0.8138 | 0.6327 | 0.8138 | 0.9021 |
| 0.101 | 3.2772 | 1324 | 0.8149 | 0.6327 | 0.8149 | 0.9027 |
| 0.101 | 3.2822 | 1326 | 0.8206 | 0.6327 | 0.8206 | 0.9059 |
| 0.101 | 3.2871 | 1328 | 0.8304 | 0.6202 | 0.8304 | 0.9113 |
| 0.101 | 3.2921 | 1330 | 0.8340 | 0.6327 | 0.8340 | 0.9132 |
| 0.101 | 3.2970 | 1332 | 0.8357 | 0.6327 | 0.8357 | 0.9142 |
| 0.101 | 3.3020 | 1334 | 0.8491 | 0.6202 | 0.8491 | 0.9215 |
| 0.101 | 3.3069 | 1336 | 0.8576 | 0.6202 | 0.8576 | 0.9261 |
| 0.101 | 3.3119 | 1338 | 0.8519 | 0.6202 | 0.8519 | 0.9230 |
| 0.101 | 3.3168 | 1340 | 0.8426 | 0.5984 | 0.8426 | 0.9179 |
| 0.101 | 3.3218 | 1342 | 0.8468 | 0.5622 | 0.8468 | 0.9202 |
| 0.101 | 3.3267 | 1344 | 0.8675 | 0.568 | 0.8675 | 0.9314 |
| 0.101 | 3.3317 | 1346 | 0.8741 | 0.5462 | 0.8741 | 0.9349 |
| 0.101 | 3.3366 | 1348 | 0.8540 | 0.5229 | 0.8540 | 0.9241 |
| 0.101 | 3.3416 | 1350 | 0.8484 | 0.5487 | 0.8484 | 0.9211 |
| 0.101 | 3.3465 | 1352 | 0.8639 | 0.5898 | 0.8639 | 0.9295 |
| 0.101 | 3.3515 | 1354 | 0.8750 | 0.5898 | 0.8750 | 0.9354 |
| 0.101 | 3.3564 | 1356 | 0.8808 | 0.5888 | 0.8808 | 0.9385 |
| 0.101 | 3.3614 | 1358 | 0.8658 | 0.5781 | 0.8658 | 0.9305 |
| 0.101 | 3.3663 | 1360 | 0.8768 | 0.5781 | 0.8768 | 0.9364 |
| 0.101 | 3.3713 | 1362 | 0.8912 | 0.5827 | 0.8912 | 0.9440 |
| 0.101 | 3.3762 | 1364 | 0.9036 | 0.5725 | 0.9036 | 0.9506 |
| 0.101 | 3.3812 | 1366 | 0.9073 | 0.5725 | 0.9073 | 0.9525 |
| 0.101 | 3.3861 | 1368 | 0.9063 | 0.5753 | 0.9063 | 0.9520 |
| 0.101 | 3.3911 | 1370 | 0.9548 | 0.5799 | 0.9548 | 0.9772 |
| 0.101 | 3.3960 | 1372 | 1.0279 | 0.5695 | 1.0279 | 1.0139 |
| 0.101 | 3.4010 | 1374 | 1.0704 | 0.5470 | 1.0704 | 1.0346 |
| 0.101 | 3.4059 | 1376 | 1.1019 | 0.5149 | 1.1019 | 1.0497 |
| 0.101 | 3.4109 | 1378 | 1.1261 | 0.4836 | 1.1261 | 1.0612 |
| 0.101 | 3.4158 | 1380 | 1.1502 | 0.4836 | 1.1502 | 1.0725 |
| 0.101 | 3.4208 | 1382 | 1.1617 | 0.4718 | 1.1617 | 1.0778 |
| 0.101 | 3.4257 | 1384 | 1.1714 | 0.4919 | 1.1714 | 1.0823 |
| 0.101 | 3.4307 | 1386 | 1.2188 | 0.4706 | 1.2188 | 1.1040 |
| 0.101 | 3.4356 | 1388 | 1.2738 | 0.4896 | 1.2738 | 1.1286 |
| 0.101 | 3.4406 | 1390 | 1.3039 | 0.4896 | 1.3039 | 1.1419 |
| 0.101 | 3.4455 | 1392 | 1.2842 | 0.4896 | 1.2842 | 1.1332 |
| 0.101 | 3.4505 | 1394 | 1.2353 | 0.4896 | 1.2353 | 1.1114 |
| 0.101 | 3.4554 | 1396 | 1.2079 | 0.4896 | 1.2079 | 1.0991 |
| 0.101 | 3.4604 | 1398 | 1.1879 | 0.4693 | 1.1879 | 1.0899 |
| 0.101 | 3.4653 | 1400 | 1.1786 | 0.4693 | 1.1786 | 1.0856 |
| 0.101 | 3.4703 | 1402 | 1.1788 | 0.4896 | 1.1788 | 1.0857 |
| 0.101 | 3.4752 | 1404 | 1.1762 | 0.4896 | 1.1762 | 1.0845 |
| 0.101 | 3.4802 | 1406 | 1.1989 | 0.4896 | 1.1989 | 1.0949 |
| 0.101 | 3.4851 | 1408 | 1.2019 | 0.4896 | 1.2019 | 1.0963 |
| 0.101 | 3.4901 | 1410 | 1.2017 | 0.4896 | 1.2017 | 1.0962 |
| 0.101 | 3.4950 | 1412 | 1.1580 | 0.5118 | 1.1580 | 1.0761 |
| 0.101 | 3.5 | 1414 | 1.1453 | 0.5118 | 1.1453 | 1.0702 |
| 0.101 | 3.5050 | 1416 | 1.1347 | 0.5118 | 1.1347 | 1.0652 |
| 0.101 | 3.5099 | 1418 | 1.1291 | 0.5118 | 1.1291 | 1.0626 |
| 0.101 | 3.5149 | 1420 | 1.0888 | 0.5586 | 1.0888 | 1.0435 |
| 0.101 | 3.5198 | 1422 | 1.0374 | 0.5377 | 1.0374 | 1.0185 |
| 0.101 | 3.5248 | 1424 | 0.9934 | 0.5695 | 0.9934 | 0.9967 |
| 0.101 | 3.5297 | 1426 | 0.9190 | 0.6379 | 0.9190 | 0.9586 |
| 0.101 | 3.5347 | 1428 | 0.8825 | 0.6196 | 0.8825 | 0.9394 |
| 0.101 | 3.5396 | 1430 | 0.8921 | 0.6196 | 0.8921 | 0.9445 |
| 0.101 | 3.5446 | 1432 | 0.9728 | 0.6078 | 0.9728 | 0.9863 |
| 0.101 | 3.5495 | 1434 | 1.0704 | 0.5157 | 1.0704 | 1.0346 |
| 0.101 | 3.5545 | 1436 | 1.1237 | 0.5157 | 1.1237 | 1.0601 |
| 0.101 | 3.5594 | 1438 | 1.2054 | 0.5241 | 1.2054 | 1.0979 |
| 0.101 | 3.5644 | 1440 | 1.2767 | 0.5235 | 1.2767 | 1.1299 |
| 0.101 | 3.5693 | 1442 | 1.3555 | 0.4896 | 1.3555 | 1.1643 |
| 0.101 | 3.5743 | 1444 | 1.3926 | 0.4455 | 1.3926 | 1.1801 |
| 0.101 | 3.5792 | 1446 | 1.4469 | 0.4244 | 1.4469 | 1.2029 |
| 0.101 | 3.5842 | 1448 | 1.4894 | 0.4247 | 1.4894 | 1.2204 |
| 0.101 | 3.5891 | 1450 | 1.4869 | 0.4663 | 1.4869 | 1.2194 |
| 0.101 | 3.5941 | 1452 | 1.4729 | 0.4658 | 1.4729 | 1.2136 |
| 0.101 | 3.5990 | 1454 | 1.3963 | 0.5095 | 1.3963 | 1.1817 |
| 0.101 | 3.6040 | 1456 | 1.2876 | 0.5235 | 1.2876 | 1.1347 |
| 0.101 | 3.6089 | 1458 | 1.1949 | 0.5241 | 1.1949 | 1.0931 |
| 0.101 | 3.6139 | 1460 | 1.0700 | 0.5060 | 1.0700 | 1.0344 |
| 0.101 | 3.6188 | 1462 | 0.9534 | 0.5753 | 0.9534 | 0.9764 |
| 0.101 | 3.6238 | 1464 | 0.9064 | 0.5675 | 0.9064 | 0.9521 |
| 0.101 | 3.6287 | 1466 | 0.8793 | 0.5825 | 0.8793 | 0.9377 |
| 0.101 | 3.6337 | 1468 | 0.9016 | 0.5675 | 0.9016 | 0.9496 |
| 0.101 | 3.6386 | 1470 | 0.9438 | 0.5396 | 0.9438 | 0.9715 |
| 0.101 | 3.6436 | 1472 | 1.0206 | 0.5329 | 1.0206 | 1.0102 |
| 0.101 | 3.6485 | 1474 | 1.1250 | 0.5047 | 1.1250 | 1.0606 |
| 0.101 | 3.6535 | 1476 | 1.2445 | 0.5241 | 1.2445 | 1.1156 |
| 0.101 | 3.6584 | 1478 | 1.3218 | 0.5229 | 1.3218 | 1.1497 |
| 0.101 | 3.6634 | 1480 | 1.3964 | 0.4882 | 1.3964 | 1.1817 |
| 0.101 | 3.6683 | 1482 | 1.4309 | 0.4663 | 1.4309 | 1.1962 |
| 0.101 | 3.6733 | 1484 | 1.4040 | 0.4663 | 1.4040 | 1.1849 |
| 0.101 | 3.6782 | 1486 | 1.3474 | 0.4882 | 1.3474 | 1.1608 |
| 0.101 | 3.6832 | 1488 | 1.3025 | 0.4882 | 1.3025 | 1.1413 |
| 0.101 | 3.6881 | 1490 | 1.2920 | 0.4674 | 1.2920 | 1.1367 |
| 0.101 | 3.6931 | 1492 | 1.2651 | 0.4882 | 1.2651 | 1.1248 |
| 0.101 | 3.6980 | 1494 | 1.2601 | 0.4882 | 1.2601 | 1.1225 |
| 0.101 | 3.7030 | 1496 | 1.2742 | 0.5 | 1.2742 | 1.1288 |
| 0.101 | 3.7079 | 1498 | 1.2666 | 0.5 | 1.2666 | 1.1254 |
| 0.0681 | 3.7129 | 1500 | 1.2292 | 0.5023 | 1.2292 | 1.1087 |
| 0.0681 | 3.7178 | 1502 | 1.2107 | 0.5023 | 1.2107 | 1.1003 |
| 0.0681 | 3.7228 | 1504 | 1.1793 | 0.5023 | 1.1793 | 1.0860 |
| 0.0681 | 3.7277 | 1506 | 1.1399 | 0.5023 | 1.1399 | 1.0677 |
| 0.0681 | 3.7327 | 1508 | 1.1306 | 0.5023 | 1.1306 | 1.0633 |
| 0.0681 | 3.7376 | 1510 | 1.1056 | 0.5258 | 1.1056 | 1.0515 |
| 0.0681 | 3.7426 | 1512 | 1.1394 | 0.4829 | 1.1394 | 1.0674 |
| 0.0681 | 3.7475 | 1514 | 1.1810 | 0.4686 | 1.1810 | 1.0868 |
| 0.0681 | 3.7525 | 1516 | 1.1989 | 0.4680 | 1.1989 | 1.0950 |
| 0.0681 | 3.7574 | 1518 | 1.1848 | 0.4680 | 1.1848 | 1.0885 |
| 0.0681 | 3.7624 | 1520 | 1.1618 | 0.4686 | 1.1618 | 1.0779 |
| 0.0681 | 3.7673 | 1522 | 1.1536 | 0.4803 | 1.1536 | 1.0740 |
| 0.0681 | 3.7723 | 1524 | 1.1601 | 0.4803 | 1.1601 | 1.0771 |
| 0.0681 | 3.7772 | 1526 | 1.1796 | 0.4803 | 1.1796 | 1.0861 |
| 0.0681 | 3.7822 | 1528 | 1.2056 | 0.4301 | 1.2056 | 1.0980 |
| 0.0681 | 3.7871 | 1530 | 1.1837 | 0.4671 | 1.1837 | 1.0880 |
| 0.0681 | 3.7921 | 1532 | 1.1579 | 0.4671 | 1.1579 | 1.0760 |
| 0.0681 | 3.7970 | 1534 | 1.1356 | 0.4923 | 1.1356 | 1.0657 |
| 0.0681 | 3.8020 | 1536 | 1.1130 | 0.4923 | 1.1130 | 1.0550 |
| 0.0681 | 3.8069 | 1538 | 1.0893 | 0.4923 | 1.0893 | 1.0437 |
| 0.0681 | 3.8119 | 1540 | 1.0921 | 0.5138 | 1.0921 | 1.0451 |
| 0.0681 | 3.8168 | 1542 | 1.0539 | 0.5511 | 1.0539 | 1.0266 |
| 0.0681 | 3.8218 | 1544 | 1.0029 | 0.5737 | 1.0029 | 1.0015 |
| 0.0681 | 3.8267 | 1546 | 0.9487 | 0.5866 | 0.9487 | 0.9740 |
| 0.0681 | 3.8317 | 1548 | 0.9420 | 0.5627 | 0.9420 | 0.9705 |
| 0.0681 | 3.8366 | 1550 | 0.9395 | 0.5418 | 0.9395 | 0.9693 |
| 0.0681 | 3.8416 | 1552 | 0.9373 | 0.5312 | 0.9373 | 0.9681 |
| 0.0681 | 3.8465 | 1554 | 0.9406 | 0.5312 | 0.9406 | 0.9699 |
| 0.0681 | 3.8515 | 1556 | 0.9621 | 0.5418 | 0.9621 | 0.9809 |
| 0.0681 | 3.8564 | 1558 | 0.9972 | 0.5532 | 0.9972 | 0.9986 |
| 0.0681 | 3.8614 | 1560 | 1.0486 | 0.5510 | 1.0486 | 1.0240 |
| 0.0681 | 3.8663 | 1562 | 1.0833 | 0.5699 | 1.0833 | 1.0408 |
| 0.0681 | 3.8713 | 1564 | 1.0848 | 0.5699 | 1.0848 | 1.0415 |
| 0.0681 | 3.8762 | 1566 | 1.0718 | 0.5699 | 1.0718 | 1.0353 |
| 0.0681 | 3.8812 | 1568 | 1.0423 | 0.5751 | 1.0423 | 1.0209 |
| 0.0681 | 3.8861 | 1570 | 1.0171 | 0.5751 | 1.0171 | 1.0085 |
| 0.0681 | 3.8911 | 1572 | 1.0169 | 0.5751 | 1.0169 | 1.0084 |
| 0.0681 | 3.8960 | 1574 | 1.0470 | 0.5751 | 1.0470 | 1.0232 |
| 0.0681 | 3.9010 | 1576 | 1.0938 | 0.5584 | 1.0938 | 1.0458 |
| 0.0681 | 3.9059 | 1578 | 1.1116 | 0.5468 | 1.1116 | 1.0543 |
| 0.0681 | 3.9109 | 1580 | 1.1125 | 0.5699 | 1.1125 | 1.0547 |
| 0.0681 | 3.9158 | 1582 | 1.1231 | 0.5584 | 1.1231 | 1.0598 |
| 0.0681 | 3.9208 | 1584 | 1.1094 | 0.5584 | 1.1094 | 1.0533 |
| 0.0681 | 3.9257 | 1586 | 1.0893 | 0.5468 | 1.0893 | 1.0437 |
| 0.0681 | 3.9307 | 1588 | 1.0806 | 0.5371 | 1.0806 | 1.0395 |
| 0.0681 | 3.9356 | 1590 | 1.0802 | 0.5187 | 1.0802 | 1.0393 |
| 0.0681 | 3.9406 | 1592 | 1.0974 | 0.4938 | 1.0974 | 1.0476 |
| 0.0681 | 3.9455 | 1594 | 1.0922 | 0.4951 | 1.0922 | 1.0451 |
| 0.0681 | 3.9505 | 1596 | 1.0790 | 0.5607 | 1.0790 | 1.0388 |
| 0.0681 | 3.9554 | 1598 | 1.0792 | 0.5584 | 1.0792 | 1.0388 |
| 0.0681 | 3.9604 | 1600 | 1.0587 | 0.5584 | 1.0587 | 1.0289 |
| 0.0681 | 3.9653 | 1602 | 1.0505 | 0.5584 | 1.0505 | 1.0249 |
| 0.0681 | 3.9703 | 1604 | 1.0546 | 0.5788 | 1.0546 | 1.0269 |
| 0.0681 | 3.9752 | 1606 | 1.0339 | 0.5814 | 1.0339 | 1.0168 |
| 0.0681 | 3.9802 | 1608 | 1.0324 | 0.5841 | 1.0324 | 1.0161 |
| 0.0681 | 3.9851 | 1610 | 1.0482 | 0.5631 | 1.0482 | 1.0238 |
| 0.0681 | 3.9901 | 1612 | 1.0495 | 0.5631 | 1.0495 | 1.0244 |
| 0.0681 | 3.9950 | 1614 | 1.0523 | 0.5631 | 1.0523 | 1.0258 |
| 0.0681 | 4.0 | 1616 | 1.0757 | 0.5607 | 1.0757 | 1.0372 |
| 0.0681 | 4.0050 | 1618 | 1.0924 | 0.5490 | 1.0924 | 1.0452 |
| 0.0681 | 4.0099 | 1620 | 1.1196 | 0.5489 | 1.1196 | 1.0581 |
| 0.0681 | 4.0149 | 1622 | 1.1518 | 0.5118 | 1.1518 | 1.0732 |
| 0.0681 | 4.0198 | 1624 | 1.1235 | 0.5371 | 1.1235 | 1.0600 |
| 0.0681 | 4.0248 | 1626 | 1.0771 | 0.5179 | 1.0771 | 1.0378 |
| 0.0681 | 4.0297 | 1628 | 1.0367 | 0.5652 | 1.0367 | 1.0182 |
| 0.0681 | 4.0347 | 1630 | 1.0266 | 0.5652 | 1.0266 | 1.0132 |
| 0.0681 | 4.0396 | 1632 | 1.0348 | 0.5628 | 1.0348 | 1.0173 |
| 0.0681 | 4.0446 | 1634 | 1.0374 | 0.5510 | 1.0374 | 1.0185 |
| 0.0681 | 4.0495 | 1636 | 0.9989 | 0.5928 | 0.9989 | 0.9995 |
| 0.0681 | 4.0545 | 1638 | 0.9662 | 0.5458 | 0.9662 | 0.9830 |
| 0.0681 | 4.0594 | 1640 | 0.9335 | 0.5061 | 0.9335 | 0.9662 |
| 0.0681 | 4.0644 | 1642 | 0.9355 | 0.5263 | 0.9355 | 0.9672 |
| 0.0681 | 4.0693 | 1644 | 0.9457 | 0.5458 | 0.9457 | 0.9725 |
| 0.0681 | 4.0743 | 1646 | 0.9483 | 0.5458 | 0.9483 | 0.9738 |
| 0.0681 | 4.0792 | 1648 | 0.9896 | 0.5215 | 0.9896 | 0.9948 |
| 0.0681 | 4.0842 | 1650 | 1.0448 | 0.5398 | 1.0448 | 1.0222 |
| 0.0681 | 4.0891 | 1652 | 1.0958 | 0.5671 | 1.0958 | 1.0468 |
| 0.0681 | 4.0941 | 1654 | 1.1185 | 0.5671 | 1.1185 | 1.0576 |
| 0.0681 | 4.0990 | 1656 | 1.1244 | 0.5671 | 1.1244 | 1.0604 |
| 0.0681 | 4.1040 | 1658 | 1.1278 | 0.5671 | 1.1278 | 1.0620 |
| 0.0681 | 4.1089 | 1660 | 1.1133 | 0.5377 | 1.1133 | 1.0551 |
| 0.0681 | 4.1139 | 1662 | 1.0908 | 0.5377 | 1.0908 | 1.0444 |
| 0.0681 | 4.1188 | 1664 | 1.0628 | 0.5089 | 1.0628 | 1.0309 |
| 0.0681 | 4.1238 | 1666 | 1.0409 | 0.52 | 1.0409 | 1.0202 |
| 0.0681 | 4.1287 | 1668 | 1.0231 | 0.52 | 1.0231 | 1.0115 |
| 0.0681 | 4.1337 | 1670 | 0.9915 | 0.56 | 0.9915 | 0.9957 |
| 0.0681 | 4.1386 | 1672 | 0.9418 | 0.6126 | 0.9418 | 0.9705 |
| 0.0681 | 4.1436 | 1674 | 0.8727 | 0.6035 | 0.8727 | 0.9342 |
| 0.0681 | 4.1485 | 1676 | 0.8494 | 0.6341 | 0.8494 | 0.9216 |
| 0.0681 | 4.1535 | 1678 | 0.8895 | 0.6432 | 0.8895 | 0.9431 |
| 0.0681 | 4.1584 | 1680 | 0.9429 | 0.5937 | 0.9429 | 0.9710 |
| 0.0681 | 4.1634 | 1682 | 1.0203 | 0.5714 | 1.0203 | 1.0101 |
| 0.0681 | 4.1683 | 1684 | 1.1116 | 0.5374 | 1.1116 | 1.0543 |
| 0.0681 | 4.1733 | 1686 | 1.1475 | 0.5229 | 1.1475 | 1.0712 |
| 0.0681 | 4.1782 | 1688 | 1.1508 | 0.5229 | 1.1508 | 1.0728 |
| 0.0681 | 4.1832 | 1690 | 1.1584 | 0.5229 | 1.1584 | 1.0763 |
| 0.0681 | 4.1881 | 1692 | 1.1555 | 0.5025 | 1.1555 | 1.0749 |
| 0.0681 | 4.1931 | 1694 | 1.1655 | 0.5025 | 1.1655 | 1.0796 |
| 0.0681 | 4.1980 | 1696 | 1.1275 | 0.5025 | 1.1275 | 1.0618 |
| 0.0681 | 4.2030 | 1698 | 1.0761 | 0.5025 | 1.0761 | 1.0373 |
| 0.0681 | 4.2079 | 1700 | 1.0227 | 0.4829 | 1.0227 | 1.0113 |
| 0.0681 | 4.2129 | 1702 | 0.9750 | 0.4829 | 0.9750 | 0.9874 |
| 0.0681 | 4.2178 | 1704 | 0.9627 | 0.5203 | 0.9627 | 0.9812 |
| 0.0681 | 4.2228 | 1706 | 0.9701 | 0.4963 | 0.9701 | 0.9850 |
| 0.0681 | 4.2277 | 1708 | 0.9768 | 0.5187 | 0.9768 | 0.9883 |
| 0.0681 | 4.2327 | 1710 | 0.9591 | 0.5390 | 0.9591 | 0.9793 |
| 0.0681 | 4.2376 | 1712 | 0.9209 | 0.5850 | 0.9209 | 0.9596 |
| 0.0681 | 4.2426 | 1714 | 0.8953 | 0.5841 | 0.8953 | 0.9462 |
| 0.0681 | 4.2475 | 1716 | 0.8764 | 0.5841 | 0.8764 | 0.9362 |
| 0.0681 | 4.2525 | 1718 | 0.8900 | 0.5533 | 0.8900 | 0.9434 |
| 0.0681 | 4.2574 | 1720 | 0.8613 | 0.5533 | 0.8613 | 0.9281 |
| 0.0681 | 4.2624 | 1722 | 0.8318 | 0.5373 | 0.8318 | 0.9121 |
| 0.0681 | 4.2673 | 1724 | 0.8104 | 0.6358 | 0.8104 | 0.9002 |
| 0.0681 | 4.2723 | 1726 | 0.7909 | 0.6358 | 0.7909 | 0.8894 |
| 0.0681 | 4.2772 | 1728 | 0.8193 | 0.5369 | 0.8193 | 0.9051 |
| 0.0681 | 4.2822 | 1730 | 0.8576 | 0.5345 | 0.8576 | 0.9261 |
| 0.0681 | 4.2871 | 1732 | 0.8974 | 0.5763 | 0.8974 | 0.9473 |
| 0.0681 | 4.2921 | 1734 | 0.9340 | 0.5763 | 0.9340 | 0.9664 |
| 0.0681 | 4.2970 | 1736 | 0.9609 | 0.5763 | 0.9609 | 0.9802 |
| 0.0681 | 4.3020 | 1738 | 1.0108 | 0.5613 | 1.0108 | 1.0054 |
| 0.0681 | 4.3069 | 1740 | 1.0550 | 0.5489 | 1.0550 | 1.0271 |
| 0.0681 | 4.3119 | 1742 | 0.9550 | 0.5511 | 0.9550 | 0.9772 |
| 0.0681 | 4.3168 | 1744 | 0.8792 | 0.6029 | 0.8792 | 0.9377 |
| 0.0681 | 4.3218 | 1746 | 0.8341 | 0.6432 | 0.8341 | 0.9133 |
| 0.0681 | 4.3267 | 1748 | 0.8750 | 0.6029 | 0.8750 | 0.9354 |
| 0.0681 | 4.3317 | 1750 | 0.9749 | 0.5719 | 0.9749 | 0.9874 |
| 0.0681 | 4.3366 | 1752 | 1.0004 | 0.5511 | 1.0004 | 1.0002 |
| 0.0681 | 4.3416 | 1754 | 1.0054 | 0.5393 | 1.0054 | 1.0027 |
| 0.0681 | 4.3465 | 1756 | 0.9990 | 0.5194 | 0.9990 | 0.9995 |
| 0.0681 | 4.3515 | 1758 | 0.9964 | 0.5194 | 0.9964 | 0.9982 |
| 0.0681 | 4.3564 | 1760 | 0.9775 | 0.5417 | 0.9775 | 0.9887 |
| 0.0681 | 4.3614 | 1762 | 0.9720 | 0.5739 | 0.9720 | 0.9859 |
| 0.0681 | 4.3663 | 1764 | 0.9811 | 0.5417 | 0.9811 | 0.9905 |
| 0.0681 | 4.3713 | 1766 | 0.9886 | 0.5417 | 0.9886 | 0.9943 |
| 0.0681 | 4.3762 | 1768 | 1.0128 | 0.5417 | 1.0128 | 1.0064 |
| 0.0681 | 4.3812 | 1770 | 1.0402 | 0.5193 | 1.0402 | 1.0199 |
| 0.0681 | 4.3861 | 1772 | 1.0613 | 0.5193 | 1.0613 | 1.0302 |
| 0.0681 | 4.3911 | 1774 | 1.0727 | 0.5379 | 1.0727 | 1.0357 |
| 0.0681 | 4.3960 | 1776 | 1.0681 | 0.5379 | 1.0681 | 1.0335 |
| 0.0681 | 4.4010 | 1778 | 1.0634 | 0.5379 | 1.0634 | 1.0312 |
| 0.0681 | 4.4059 | 1780 | 1.0493 | 0.5379 | 1.0493 | 1.0243 |
| 0.0681 | 4.4109 | 1782 | 1.0246 | 0.5379 | 1.0246 | 1.0122 |
| 0.0681 | 4.4158 | 1784 | 1.0154 | 0.5379 | 1.0154 | 1.0077 |
| 0.0681 | 4.4208 | 1786 | 1.0000 | 0.5379 | 1.0000 | 1.0000 |
| 0.0681 | 4.4257 | 1788 | 1.0010 | 0.5379 | 1.0010 | 1.0005 |
| 0.0681 | 4.4307 | 1790 | 0.9835 | 0.5918 | 0.9835 | 0.9917 |
| 0.0681 | 4.4356 | 1792 | 0.9588 | 0.6109 | 0.9588 | 0.9792 |
| 0.0681 | 4.4406 | 1794 | 0.9326 | 0.6325 | 0.9326 | 0.9657 |
| 0.0681 | 4.4455 | 1796 | 0.9185 | 0.6341 | 0.9185 | 0.9584 |
| 0.0681 | 4.4505 | 1798 | 0.8961 | 0.6283 | 0.8961 | 0.9466 |
| 0.0681 | 4.4554 | 1800 | 0.8774 | 0.5938 | 0.8774 | 0.9367 |
| 0.0681 | 4.4604 | 1802 | 0.8711 | 0.5938 | 0.8711 | 0.9333 |
| 0.0681 | 4.4653 | 1804 | 0.8767 | 0.6179 | 0.8767 | 0.9363 |
| 0.0681 | 4.4703 | 1806 | 0.8876 | 0.6179 | 0.8876 | 0.9421 |
| 0.0681 | 4.4752 | 1808 | 0.8920 | 0.6179 | 0.8920 | 0.9445 |
| 0.0681 | 4.4802 | 1810 | 0.9160 | 0.5938 | 0.9160 | 0.9571 |
| 0.0681 | 4.4851 | 1812 | 0.9442 | 0.5909 | 0.9442 | 0.9717 |
| 0.0681 | 4.4901 | 1814 | 0.9561 | 0.5973 | 0.9561 | 0.9778 |
| 0.0681 | 4.4950 | 1816 | 0.9497 | 0.5973 | 0.9497 | 0.9745 |
| 0.0681 | 4.5 | 1818 | 0.9205 | 0.6114 | 0.9205 | 0.9594 |
| 0.0681 | 4.5050 | 1820 | 0.8745 | 0.6179 | 0.8745 | 0.9352 |
| 0.0681 | 4.5099 | 1822 | 0.8628 | 0.6408 | 0.8628 | 0.9289 |
| 0.0681 | 4.5149 | 1824 | 0.8697 | 0.6678 | 0.8697 | 0.9326 |
| 0.0681 | 4.5198 | 1826 | 0.8772 | 0.6763 | 0.8772 | 0.9366 |
| 0.0681 | 4.5248 | 1828 | 0.8770 | 0.6763 | 0.8770 | 0.9365 |
| 0.0681 | 4.5297 | 1830 | 0.8776 | 0.6596 | 0.8776 | 0.9368 |
| 0.0681 | 4.5347 | 1832 | 0.8765 | 0.6555 | 0.8765 | 0.9362 |
| 0.0681 | 4.5396 | 1834 | 0.8781 | 0.6515 | 0.8781 | 0.9371 |
| 0.0681 | 4.5446 | 1836 | 0.8857 | 0.6329 | 0.8857 | 0.9411 |
| 0.0681 | 4.5495 | 1838 | 0.9032 | 0.6408 | 0.9032 | 0.9504 |
| 0.0681 | 4.5545 | 1840 | 0.9326 | 0.6207 | 0.9326 | 0.9657 |
| 0.0681 | 4.5594 | 1842 | 0.9635 | 0.5812 | 0.9635 | 0.9816 |
| 0.0681 | 4.5644 | 1844 | 0.9881 | 0.5787 | 0.9881 | 0.9940 |
| 0.0681 | 4.5693 | 1846 | 0.9776 | 0.5787 | 0.9776 | 0.9887 |
| 0.0681 | 4.5743 | 1848 | 0.9461 | 0.5918 | 0.9461 | 0.9727 |
| 0.0681 | 4.5792 | 1850 | 0.9158 | 0.6083 | 0.9158 | 0.9570 |
| 0.0681 | 4.5842 | 1852 | 0.8872 | 0.6146 | 0.8872 | 0.9419 |
| 0.0681 | 4.5891 | 1854 | 0.8592 | 0.6197 | 0.8592 | 0.9270 |
| 0.0681 | 4.5941 | 1856 | 0.8405 | 0.6387 | 0.8405 | 0.9168 |
| 0.0681 | 4.5990 | 1858 | 0.8363 | 0.6366 | 0.8363 | 0.9145 |
| 0.0681 | 4.6040 | 1860 | 0.8329 | 0.6694 | 0.8329 | 0.9126 |
| 0.0681 | 4.6089 | 1862 | 0.8195 | 0.6667 | 0.8195 | 0.9053 |
| 0.0681 | 4.6139 | 1864 | 0.8020 | 0.6830 | 0.8020 | 0.8955 |
| 0.0681 | 4.6188 | 1866 | 0.7944 | 0.6860 | 0.7944 | 0.8913 |
| 0.0681 | 4.6238 | 1868 | 0.7901 | 0.6875 | 0.7901 | 0.8889 |
| 0.0681 | 4.6287 | 1870 | 0.7941 | 0.6875 | 0.7941 | 0.8911 |
| 0.0681 | 4.6337 | 1872 | 0.7942 | 0.6875 | 0.7942 | 0.8912 |
| 0.0681 | 4.6386 | 1874 | 0.7946 | 0.6815 | 0.7946 | 0.8914 |
| 0.0681 | 4.6436 | 1876 | 0.8055 | 0.6705 | 0.8055 | 0.8975 |
| 0.0681 | 4.6485 | 1878 | 0.8200 | 0.6705 | 0.8200 | 0.9055 |
| 0.0681 | 4.6535 | 1880 | 0.8341 | 0.6705 | 0.8341 | 0.9133 |
| 0.0681 | 4.6584 | 1882 | 0.8348 | 0.6865 | 0.8348 | 0.9137 |
| 0.0681 | 4.6634 | 1884 | 0.8373 | 0.6865 | 0.8373 | 0.9150 |
| 0.0681 | 4.6683 | 1886 | 0.8447 | 0.6821 | 0.8447 | 0.9191 |
| 0.0681 | 4.6733 | 1888 | 0.8496 | 0.6738 | 0.8496 | 0.9217 |
| 0.0681 | 4.6782 | 1890 | 0.8535 | 0.6738 | 0.8535 | 0.9238 |
| 0.0681 | 4.6832 | 1892 | 0.8613 | 0.6765 | 0.8613 | 0.9281 |
| 0.0681 | 4.6881 | 1894 | 0.8606 | 0.6531 | 0.8606 | 0.9277 |
| 0.0681 | 4.6931 | 1896 | 0.8416 | 0.6738 | 0.8416 | 0.9174 |
| 0.0681 | 4.6980 | 1898 | 0.8107 | 0.6705 | 0.8107 | 0.9004 |
| 0.0681 | 4.7030 | 1900 | 0.7867 | 0.6793 | 0.7867 | 0.8869 |
| 0.0681 | 4.7079 | 1902 | 0.7775 | 0.6908 | 0.7775 | 0.8818 |
| 0.0681 | 4.7129 | 1904 | 0.7771 | 0.7021 | 0.7771 | 0.8815 |
| 0.0681 | 4.7178 | 1906 | 0.7774 | 0.6986 | 0.7774 | 0.8817 |
| 0.0681 | 4.7228 | 1908 | 0.7717 | 0.7021 | 0.7717 | 0.8785 |
| 0.0681 | 4.7277 | 1910 | 0.7625 | 0.7021 | 0.7625 | 0.8732 |
| 0.0681 | 4.7327 | 1912 | 0.7531 | 0.7021 | 0.7531 | 0.8678 |
| 0.0681 | 4.7376 | 1914 | 0.7492 | 0.6793 | 0.7492 | 0.8656 |
| 0.0681 | 4.7426 | 1916 | 0.7586 | 0.6793 | 0.7586 | 0.8710 |
| 0.0681 | 4.7475 | 1918 | 0.7874 | 0.6606 | 0.7874 | 0.8873 |
| 0.0681 | 4.7525 | 1920 | 0.8230 | 0.6369 | 0.8230 | 0.9072 |
| 0.0681 | 4.7574 | 1922 | 0.8593 | 0.6077 | 0.8593 | 0.9270 |
| 0.0681 | 4.7624 | 1924 | 0.8840 | 0.5955 | 0.8840 | 0.9402 |
| 0.0681 | 4.7673 | 1926 | 0.8920 | 0.5728 | 0.8920 | 0.9445 |
| 0.0681 | 4.7723 | 1928 | 0.8956 | 0.5734 | 0.8956 | 0.9464 |
| 0.0681 | 4.7772 | 1930 | 0.8857 | 0.5984 | 0.8857 | 0.9411 |
| 0.0681 | 4.7822 | 1932 | 0.8694 | 0.6301 | 0.8694 | 0.9324 |
| 0.0681 | 4.7871 | 1934 | 0.8728 | 0.6337 | 0.8728 | 0.9343 |
| 0.0681 | 4.7921 | 1936 | 0.8891 | 0.5728 | 0.8891 | 0.9429 |
| 0.0681 | 4.7970 | 1938 | 0.9068 | 0.5622 | 0.9068 | 0.9522 |
| 0.0681 | 4.8020 | 1940 | 0.9297 | 0.5622 | 0.9297 | 0.9642 |
| 0.0681 | 4.8069 | 1942 | 0.9433 | 0.5622 | 0.9433 | 0.9712 |
| 0.0681 | 4.8119 | 1944 | 0.9651 | 0.5622 | 0.9651 | 0.9824 |
| 0.0681 | 4.8168 | 1946 | 1.0251 | 0.5714 | 1.0251 | 1.0125 |
| 0.0681 | 4.8218 | 1948 | 1.0730 | 0.5171 | 1.0730 | 1.0359 |
| 0.0681 | 4.8267 | 1950 | 1.0989 | 0.5263 | 1.0989 | 1.0483 |
| 0.0681 | 4.8317 | 1952 | 1.1226 | 0.5127 | 1.1226 | 1.0595 |
| 0.0681 | 4.8366 | 1954 | 1.1218 | 0.4930 | 1.1218 | 1.0592 |
| 0.0681 | 4.8416 | 1956 | 1.1067 | 0.4930 | 1.1067 | 1.0520 |
| 0.0681 | 4.8465 | 1958 | 1.0902 | 0.4930 | 1.0902 | 1.0441 |
| 0.0681 | 4.8515 | 1960 | 1.0885 | 0.5171 | 1.0885 | 1.0433 |
| 0.0681 | 4.8564 | 1962 | 1.0879 | 0.5187 | 1.0879 | 1.0430 |
| 0.0681 | 4.8614 | 1964 | 1.0862 | 0.5187 | 1.0862 | 1.0422 |
| 0.0681 | 4.8663 | 1966 | 1.0740 | 0.4754 | 1.0740 | 1.0363 |
| 0.0681 | 4.8713 | 1968 | 1.0605 | 0.5187 | 1.0605 | 1.0298 |
| 0.0681 | 4.8762 | 1970 | 1.0498 | 0.5187 | 1.0498 | 1.0246 |
| 0.0681 | 4.8812 | 1972 | 1.0474 | 0.5073 | 1.0474 | 1.0234 |
| 0.0681 | 4.8861 | 1974 | 1.0227 | 0.5217 | 1.0227 | 1.0113 |
| 0.0681 | 4.8911 | 1976 | 0.9947 | 0.5415 | 0.9947 | 0.9973 |
| 0.0681 | 4.8960 | 1978 | 0.9718 | 0.5129 | 0.9718 | 0.9858 |
| 0.0681 | 4.9010 | 1980 | 0.9655 | 0.4928 | 0.9655 | 0.9826 |
| 0.0681 | 4.9059 | 1982 | 0.9830 | 0.5024 | 0.9830 | 0.9915 |
| 0.0681 | 4.9109 | 1984 | 0.9974 | 0.5224 | 0.9974 | 0.9987 |
| 0.0681 | 4.9158 | 1986 | 0.9831 | 0.5224 | 0.9831 | 0.9915 |
| 0.0681 | 4.9208 | 1988 | 0.9761 | 0.5224 | 0.9761 | 0.9880 |
| 0.0681 | 4.9257 | 1990 | 0.9924 | 0.5217 | 0.9924 | 0.9962 |
| 0.0681 | 4.9307 | 1992 | 1.0033 | 0.5217 | 1.0033 | 1.0016 |
| 0.0681 | 4.9356 | 1994 | 1.0230 | 0.5433 | 1.0230 | 1.0114 |
| 0.0681 | 4.9406 | 1996 | 1.0429 | 0.5171 | 1.0429 | 1.0212 |
| 0.0681 | 4.9455 | 1998 | 1.0287 | 0.5410 | 1.0287 | 1.0143 |
| 0.055 | 4.9505 | 2000 | 1.0013 | 0.5217 | 1.0013 | 1.0007 |
| 0.055 | 4.9554 | 2002 | 0.9675 | 0.4799 | 0.9675 | 0.9836 |
| 0.055 | 4.9604 | 2004 | 0.9536 | 0.5150 | 0.9537 | 0.9766 |
| 0.055 | 4.9653 | 2006 | 0.9363 | 0.5396 | 0.9363 | 0.9676 |
| 0.055 | 4.9703 | 2008 | 0.9281 | 0.5720 | 0.9281 | 0.9634 |
| 0.055 | 4.9752 | 2010 | 0.9050 | 0.5990 | 0.9050 | 0.9513 |
| 0.055 | 4.9802 | 2012 | 0.8830 | 0.5990 | 0.8830 | 0.9397 |
| 0.055 | 4.9851 | 2014 | 0.8582 | 0.5990 | 0.8582 | 0.9264 |
| 0.055 | 4.9901 | 2016 | 0.8498 | 0.5990 | 0.8498 | 0.9218 |
| 0.055 | 4.9950 | 2018 | 0.8492 | 0.6003 | 0.8492 | 0.9215 |
| 0.055 | 5.0000 | 2020 | 0.8333 | 0.6127 | 0.8333 | 0.9129 |
| 0.055 | 5.0050 | 2022 | 0.8266 | 0.6111 | 0.8266 | 0.9092 |
| 0.055 | 5.0099 | 2024 | 0.8352 | 0.6003 | 0.8352 | 0.9139 |
| 0.055 | 5.0149 | 2026 | 0.8653 | 0.6003 | 0.8653 | 0.9302 |
| 0.055 | 5.0198 | 2028 | 0.8936 | 0.5848 | 0.8936 | 0.9453 |
| 0.055 | 5.0248 | 2030 | 0.9101 | 0.5848 | 0.9101 | 0.9540 |
| 0.055 | 5.0297 | 2032 | 0.9167 | 0.5599 | 0.9167 | 0.9574 |
| 0.055 | 5.0347 | 2034 | 0.9311 | 0.5599 | 0.9311 | 0.9649 |
| 0.055 | 5.0396 | 2036 | 0.9904 | 0.5315 | 0.9904 | 0.9952 |
| 0.055 | 5.0446 | 2038 | 1.0909 | 0.5127 | 1.0909 | 1.0445 |
| 0.055 | 5.0495 | 2040 | 1.1980 | 0.5000 | 1.1980 | 1.0945 |
| 0.055 | 5.0545 | 2042 | 1.2815 | 0.4668 | 1.2815 | 1.1320 |
| 0.055 | 5.0594 | 2044 | 1.3323 | 0.4668 | 1.3323 | 1.1542 |
| 0.055 | 5.0644 | 2046 | 1.3731 | 0.4430 | 1.3731 | 1.1718 |
| 0.055 | 5.0693 | 2048 | 1.3973 | 0.4307 | 1.3973 | 1.1821 |
| 0.055 | 5.0743 | 2050 | 1.3801 | 0.4430 | 1.3801 | 1.1748 |
| 0.055 | 5.0792 | 2052 | 1.3449 | 0.4208 | 1.3449 | 1.1597 |
| 0.055 | 5.0842 | 2054 | 1.3006 | 0.4208 | 1.3006 | 1.1404 |
| 0.055 | 5.0891 | 2056 | 1.2717 | 0.4432 | 1.2717 | 1.1277 |
| 0.055 | 5.0941 | 2058 | 1.2670 | 0.4432 | 1.2670 | 1.1256 |
| 0.055 | 5.0990 | 2060 | 1.2350 | 0.4432 | 1.2350 | 1.1113 |
| 0.055 | 5.1040 | 2062 | 1.2165 | 0.4432 | 1.2165 | 1.1029 |
| 0.055 | 5.1089 | 2064 | 1.2341 | 0.4432 | 1.2341 | 1.1109 |
| 0.055 | 5.1139 | 2066 | 1.2485 | 0.4432 | 1.2485 | 1.1173 |
| 0.055 | 5.1188 | 2068 | 1.2688 | 0.4653 | 1.2688 | 1.1264 |
| 0.055 | 5.1238 | 2070 | 1.2596 | 0.4430 | 1.2596 | 1.1223 |
| 0.055 | 5.1287 | 2072 | 1.2465 | 0.4668 | 1.2465 | 1.1165 |
| 0.055 | 5.1337 | 2074 | 1.2126 | 0.5229 | 1.2126 | 1.1012 |
| 0.055 | 5.1386 | 2076 | 1.1624 | 0.5229 | 1.1624 | 1.0781 |
| 0.055 | 5.1436 | 2078 | 1.1357 | 0.5246 | 1.1357 | 1.0657 |
| 0.055 | 5.1485 | 2080 | 1.1055 | 0.5246 | 1.1055 | 1.0514 |
| 0.055 | 5.1535 | 2082 | 1.0686 | 0.5246 | 1.0686 | 1.0337 |
| 0.055 | 5.1584 | 2084 | 1.0647 | 0.5051 | 1.0647 | 1.0318 |
| 0.055 | 5.1634 | 2086 | 1.0760 | 0.5051 | 1.0760 | 1.0373 |
| 0.055 | 5.1683 | 2088 | 1.0887 | 0.5263 | 1.0887 | 1.0434 |
| 0.055 | 5.1733 | 2090 | 1.1250 | 0.5246 | 1.1250 | 1.0606 |
| 0.055 | 5.1782 | 2092 | 1.1546 | 0.5246 | 1.1546 | 1.0745 |
| 0.055 | 5.1832 | 2094 | 1.1813 | 0.5246 | 1.1813 | 1.0869 |
| 0.055 | 5.1881 | 2096 | 1.2033 | 0.4901 | 1.2033 | 1.0969 |
| 0.055 | 5.1931 | 2098 | 1.2071 | 0.4901 | 1.2071 | 1.0987 |
| 0.055 | 5.1980 | 2100 | 1.2069 | 0.4901 | 1.2069 | 1.0986 |
| 0.055 | 5.2030 | 2102 | 1.1992 | 0.4901 | 1.1992 | 1.0951 |
| 0.055 | 5.2079 | 2104 | 1.1910 | 0.4901 | 1.1910 | 1.0913 |
| 0.055 | 5.2129 | 2106 | 1.1409 | 0.5038 | 1.1409 | 1.0681 |
| 0.055 | 5.2178 | 2108 | 1.1294 | 0.5246 | 1.1294 | 1.0628 |
| 0.055 | 5.2228 | 2110 | 1.1414 | 0.5246 | 1.1414 | 1.0684 |
| 0.055 | 5.2277 | 2112 | 1.1495 | 0.5252 | 1.1495 | 1.0722 |
| 0.055 | 5.2327 | 2114 | 1.1755 | 0.5012 | 1.1755 | 1.0842 |
| 0.055 | 5.2376 | 2116 | 1.2168 | 0.5012 | 1.2168 | 1.1031 |
| 0.055 | 5.2426 | 2118 | 1.2753 | 0.5012 | 1.2753 | 1.1293 |
| 0.055 | 5.2475 | 2120 | 1.3359 | 0.4780 | 1.3359 | 1.1558 |
| 0.055 | 5.2525 | 2122 | 1.2640 | 0.5012 | 1.2640 | 1.1243 |
| 0.055 | 5.2574 | 2124 | 1.2056 | 0.4812 | 1.2056 | 1.0980 |
| 0.055 | 5.2624 | 2126 | 1.1756 | 0.4943 | 1.1756 | 1.0842 |
| 0.055 | 5.2673 | 2128 | 1.1622 | 0.5135 | 1.1622 | 1.0780 |
| 0.055 | 5.2723 | 2130 | 1.1466 | 0.4945 | 1.1466 | 1.0708 |
| 0.055 | 5.2772 | 2132 | 1.1265 | 0.5186 | 1.1265 | 1.0614 |
| 0.055 | 5.2822 | 2134 | 1.1004 | 0.5598 | 1.1004 | 1.0490 |
| 0.055 | 5.2871 | 2136 | 1.0721 | 0.5900 | 1.0721 | 1.0354 |
| 0.055 | 5.2921 | 2138 | 1.0522 | 0.5900 | 1.0522 | 1.0258 |
| 0.055 | 5.2970 | 2140 | 1.0369 | 0.5753 | 1.0369 | 1.0183 |
| 0.055 | 5.3020 | 2142 | 1.0094 | 0.5759 | 1.0094 | 1.0047 |
| 0.055 | 5.3069 | 2144 | 0.9881 | 0.5759 | 0.9881 | 0.9940 |
| 0.055 | 5.3119 | 2146 | 0.9650 | 0.5855 | 0.9650 | 0.9823 |
| 0.055 | 5.3168 | 2148 | 0.9479 | 0.6129 | 0.9479 | 0.9736 |
| 0.055 | 5.3218 | 2150 | 0.9492 | 0.6116 | 0.9492 | 0.9743 |
| 0.055 | 5.3267 | 2152 | 0.9841 | 0.6024 | 0.9841 | 0.9920 |
| 0.055 | 5.3317 | 2154 | 1.0199 | 0.6035 | 1.0199 | 1.0099 |
| 0.055 | 5.3366 | 2156 | 1.0300 | 0.5845 | 1.0300 | 1.0149 |
| 0.055 | 5.3416 | 2158 | 1.0353 | 0.5845 | 1.0353 | 1.0175 |
| 0.055 | 5.3465 | 2160 | 1.0529 | 0.5937 | 1.0529 | 1.0261 |
| 0.055 | 5.3515 | 2162 | 1.0907 | 0.5490 | 1.0907 | 1.0444 |
| 0.055 | 5.3564 | 2164 | 1.1056 | 0.5179 | 1.1056 | 1.0515 |
| 0.055 | 5.3614 | 2166 | 1.1135 | 0.5377 | 1.1135 | 1.0552 |
| 0.055 | 5.3663 | 2168 | 1.1135 | 0.5377 | 1.1135 | 1.0552 |
| 0.055 | 5.3713 | 2170 | 1.0930 | 0.5490 | 1.0930 | 1.0455 |
| 0.055 | 5.3762 | 2172 | 1.0730 | 0.5490 | 1.0730 | 1.0358 |
| 0.055 | 5.3812 | 2174 | 1.0465 | 0.5490 | 1.0465 | 1.0230 |
| 0.055 | 5.3861 | 2176 | 1.0257 | 0.5510 | 1.0257 | 1.0128 |
| 0.055 | 5.3911 | 2178 | 1.0111 | 0.5739 | 1.0111 | 1.0055 |
| 0.055 | 5.3960 | 2180 | 0.9929 | 0.5937 | 0.9929 | 0.9964 |
| 0.055 | 5.4010 | 2182 | 0.9606 | 0.5937 | 0.9606 | 0.9801 |
| 0.055 | 5.4059 | 2184 | 0.9234 | 0.5845 | 0.9234 | 0.9609 |
| 0.055 | 5.4109 | 2186 | 0.8946 | 0.5973 | 0.8946 | 0.9459 |
| 0.055 | 5.4158 | 2188 | 0.8685 | 0.6020 | 0.8685 | 0.9319 |
| 0.055 | 5.4208 | 2190 | 0.8674 | 0.6064 | 0.8674 | 0.9313 |
| 0.055 | 5.4257 | 2192 | 0.8783 | 0.5824 | 0.8783 | 0.9372 |
| 0.055 | 5.4307 | 2194 | 0.8916 | 0.5574 | 0.8916 | 0.9443 |
| 0.055 | 5.4356 | 2196 | 0.9095 | 0.5759 | 0.9095 | 0.9537 |
| 0.055 | 5.4406 | 2198 | 0.9349 | 0.5739 | 0.9349 | 0.9669 |
| 0.055 | 5.4455 | 2200 | 0.9812 | 0.5739 | 0.9812 | 0.9906 |
| 0.055 | 5.4505 | 2202 | 1.0372 | 0.5490 | 1.0372 | 1.0185 |
| 0.055 | 5.4554 | 2204 | 1.1008 | 0.5469 | 1.1008 | 1.0492 |
| 0.055 | 5.4604 | 2206 | 1.1741 | 0.5023 | 1.1741 | 1.0836 |
| 0.055 | 5.4653 | 2208 | 1.2203 | 0.5023 | 1.2203 | 1.1047 |
| 0.055 | 5.4703 | 2210 | 1.2617 | 0.4812 | 1.2617 | 1.1233 |
| 0.055 | 5.4752 | 2212 | 1.2752 | 0.4502 | 1.2752 | 1.1292 |
| 0.055 | 5.4802 | 2214 | 1.2815 | 0.4712 | 1.2815 | 1.1320 |
| 0.055 | 5.4851 | 2216 | 1.2591 | 0.4706 | 1.2591 | 1.1221 |
| 0.055 | 5.4901 | 2218 | 1.2383 | 0.4706 | 1.2383 | 1.1128 |
| 0.055 | 5.4950 | 2220 | 1.2167 | 0.4712 | 1.2167 | 1.1030 |
| 0.055 | 5.5000 | 2222 | 1.1891 | 0.4712 | 1.1891 | 1.0905 |
| 0.055 | 5.5050 | 2224 | 1.1244 | 0.5047 | 1.1244 | 1.0604 |
| 0.055 | 5.5099 | 2226 | 1.0774 | 0.5274 | 1.0774 | 1.0380 |
| 0.055 | 5.5149 | 2228 | 1.0179 | 0.5274 | 1.0179 | 1.0089 |
| 0.055 | 5.5198 | 2230 | 0.9585 | 0.5210 | 0.9585 | 0.9790 |
| 0.055 | 5.5248 | 2232 | 0.9066 | 0.5604 | 0.9066 | 0.9522 |
| 0.055 | 5.5297 | 2234 | 0.8775 | 0.6029 | 0.8775 | 0.9368 |
| 0.055 | 5.5347 | 2236 | 0.8771 | 0.6029 | 0.8771 | 0.9365 |
| 0.055 | 5.5396 | 2238 | 0.8929 | 0.5263 | 0.8929 | 0.9449 |
| 0.055 | 5.5446 | 2240 | 0.9121 | 0.5101 | 0.9121 | 0.9550 |
| 0.055 | 5.5495 | 2242 | 0.9256 | 0.5087 | 0.9256 | 0.9621 |
| 0.055 | 5.5545 | 2244 | 0.9525 | 0.5468 | 0.9525 | 0.9760 |
| 0.055 | 5.5594 | 2246 | 0.9829 | 0.5468 | 0.9829 | 0.9914 |
| 0.055 | 5.5644 | 2248 | 1.0102 | 0.5468 | 1.0102 | 1.0051 |
| 0.055 | 5.5693 | 2250 | 1.0488 | 0.5241 | 1.0488 | 1.0241 |
| 0.055 | 5.5743 | 2252 | 1.0682 | 0.5241 | 1.0682 | 1.0335 |
| 0.055 | 5.5792 | 2254 | 1.0621 | 0.5241 | 1.0621 | 1.0306 |
| 0.055 | 5.5842 | 2256 | 1.0434 | 0.5241 | 1.0434 | 1.0215 |
| 0.055 | 5.5891 | 2258 | 1.0266 | 0.5468 | 1.0266 | 1.0132 |
| 0.055 | 5.5941 | 2260 | 1.0124 | 0.5468 | 1.0124 | 1.0062 |
| 0.055 | 5.5990 | 2262 | 1.0099 | 0.5274 | 1.0099 | 1.0049 |
| 0.055 | 5.6040 | 2264 | 1.0033 | 0.5879 | 1.0033 | 1.0016 |
| 0.055 | 5.6089 | 2266 | 0.9988 | 0.5879 | 0.9988 | 0.9994 |
| 0.055 | 5.6139 | 2268 | 0.9997 | 0.5879 | 0.9997 | 0.9998 |
| 0.055 | 5.6188 | 2270 | 1.0262 | 0.5413 | 1.0262 | 1.0130 |
| 0.055 | 5.6238 | 2272 | 1.0661 | 0.5393 | 1.0661 | 1.0325 |
| 0.055 | 5.6287 | 2274 | 1.1203 | 0.5356 | 1.1203 | 1.0584 |
| 0.055 | 5.6337 | 2276 | 1.1754 | 0.5356 | 1.1754 | 1.0841 |
| 0.055 | 5.6386 | 2278 | 1.2226 | 0.4923 | 1.2226 | 1.1057 |
| 0.055 | 5.6436 | 2280 | 1.2720 | 0.4923 | 1.2720 | 1.1278 |
| 0.055 | 5.6485 | 2282 | 1.3251 | 0.4717 | 1.3251 | 1.1511 |
| 0.055 | 5.6535 | 2284 | 1.3532 | 0.4717 | 1.3532 | 1.1633 |
| 0.055 | 5.6584 | 2286 | 1.3707 | 0.4717 | 1.3707 | 1.1708 |
| 0.055 | 5.6634 | 2288 | 1.3774 | 0.4474 | 1.3774 | 1.1736 |
| 0.055 | 5.6683 | 2290 | 1.4005 | 0.4474 | 1.4005 | 1.1834 |
| 0.055 | 5.6733 | 2292 | 1.4113 | 0.4474 | 1.4113 | 1.1880 |
| 0.055 | 5.6782 | 2294 | 1.4429 | 0.4451 | 1.4429 | 1.2012 |
| 0.055 | 5.6832 | 2296 | 1.4519 | 0.4451 | 1.4519 | 1.2050 |
| 0.055 | 5.6881 | 2298 | 1.4440 | 0.4451 | 1.4440 | 1.2017 |
| 0.055 | 5.6931 | 2300 | 1.4291 | 0.4451 | 1.4291 | 1.1954 |
| 0.055 | 5.6980 | 2302 | 1.4156 | 0.4584 | 1.4156 | 1.1898 |
| 0.055 | 5.7030 | 2304 | 1.4019 | 0.4584 | 1.4019 | 1.1840 |
| 0.055 | 5.7079 | 2306 | 1.3834 | 0.4584 | 1.3834 | 1.1762 |
| 0.055 | 5.7129 | 2308 | 1.3495 | 0.4717 | 1.3495 | 1.1617 |
| 0.055 | 5.7178 | 2310 | 1.3016 | 0.4717 | 1.3016 | 1.1409 |
| 0.055 | 5.7228 | 2312 | 1.2430 | 0.4923 | 1.2430 | 1.1149 |
| 0.055 | 5.7277 | 2314 | 1.2044 | 0.5135 | 1.2044 | 1.0974 |
| 0.055 | 5.7327 | 2316 | 1.1715 | 0.5157 | 1.1715 | 1.0823 |
| 0.055 | 5.7376 | 2318 | 1.1434 | 0.5157 | 1.1434 | 1.0693 |
| 0.055 | 5.7426 | 2320 | 1.1256 | 0.5157 | 1.1256 | 1.0610 |
| 0.055 | 5.7475 | 2322 | 1.1069 | 0.5268 | 1.1069 | 1.0521 |
| 0.055 | 5.7525 | 2324 | 1.0989 | 0.5268 | 1.0989 | 1.0483 |
| 0.055 | 5.7574 | 2326 | 1.1020 | 0.5157 | 1.1020 | 1.0497 |
| 0.055 | 5.7624 | 2328 | 1.1208 | 0.5157 | 1.1208 | 1.0587 |
| 0.055 | 5.7673 | 2330 | 1.1379 | 0.4753 | 1.1379 | 1.0667 |
| 0.055 | 5.7723 | 2332 | 1.1548 | 0.4828 | 1.1548 | 1.0746 |
| 0.055 | 5.7772 | 2334 | 1.1563 | 0.5023 | 1.1563 | 1.0753 |
| 0.055 | 5.7822 | 2336 | 1.1400 | 0.5157 | 1.1400 | 1.0677 |
| 0.055 | 5.7871 | 2338 | 1.1614 | 0.5356 | 1.1614 | 1.0777 |
| 0.055 | 5.7921 | 2340 | 1.1889 | 0.5356 | 1.1889 | 1.0904 |
| 0.055 | 5.7970 | 2342 | 1.1709 | 0.5356 | 1.1709 | 1.0821 |
| 0.055 | 5.8020 | 2344 | 1.1484 | 0.5157 | 1.1484 | 1.0716 |
| 0.055 | 5.8069 | 2346 | 1.1257 | 0.5157 | 1.1257 | 1.0610 |
| 0.055 | 5.8119 | 2348 | 1.0986 | 0.5157 | 1.0986 | 1.0482 |
| 0.055 | 5.8168 | 2350 | 1.0878 | 0.5157 | 1.0878 | 1.0430 |
| 0.055 | 5.8218 | 2352 | 1.0605 | 0.5186 | 1.0605 | 1.0298 |
| 0.055 | 5.8267 | 2354 | 1.0470 | 0.5186 | 1.0470 | 1.0232 |
| 0.055 | 5.8317 | 2356 | 1.0546 | 0.5186 | 1.0546 | 1.0269 |
| 0.055 | 5.8366 | 2358 | 1.0696 | 0.5186 | 1.0696 | 1.0342 |
| 0.055 | 5.8416 | 2360 | 1.0902 | 0.5356 | 1.0902 | 1.0441 |
| 0.055 | 5.8465 | 2362 | 1.1070 | 0.5356 | 1.1070 | 1.0521 |
| 0.055 | 5.8515 | 2364 | 1.1303 | 0.5356 | 1.1303 | 1.0631 |
| 0.055 | 5.8564 | 2366 | 1.0858 | 0.5356 | 1.0858 | 1.0420 |
| 0.055 | 5.8614 | 2368 | 1.0315 | 0.5285 | 1.0315 | 1.0156 |
| 0.055 | 5.8663 | 2370 | 0.9759 | 0.5833 | 0.9759 | 0.9879 |
| 0.055 | 5.8713 | 2372 | 0.9371 | 0.5833 | 0.9371 | 0.9680 |
| 0.055 | 5.8762 | 2374 | 0.9149 | 0.5861 | 0.9149 | 0.9565 |
| 0.055 | 5.8812 | 2376 | 0.8961 | 0.5673 | 0.8961 | 0.9466 |
| 0.055 | 5.8861 | 2378 | 0.8832 | 0.5673 | 0.8832 | 0.9398 |
| 0.055 | 5.8911 | 2380 | 0.8805 | 0.5861 | 0.8805 | 0.9384 |
| 0.055 | 5.8960 | 2382 | 0.8710 | 0.6029 | 0.8710 | 0.9333 |
| 0.055 | 5.9010 | 2384 | 0.8684 | 0.5937 | 0.8684 | 0.9319 |
| 0.055 | 5.9059 | 2386 | 0.8637 | 0.6110 | 0.8637 | 0.9294 |
| 0.055 | 5.9109 | 2388 | 0.8433 | 0.6196 | 0.8433 | 0.9183 |
| 0.055 | 5.9158 | 2390 | 0.8261 | 0.6214 | 0.8261 | 0.9089 |
| 0.055 | 5.9208 | 2392 | 0.8200 | 0.6447 | 0.8200 | 0.9055 |
| 0.055 | 5.9257 | 2394 | 0.8142 | 0.6447 | 0.8142 | 0.9023 |
| 0.055 | 5.9307 | 2396 | 0.8088 | 0.6447 | 0.8088 | 0.8994 |
| 0.055 | 5.9356 | 2398 | 0.8037 | 0.6650 | 0.8037 | 0.8965 |
| 0.055 | 5.9406 | 2400 | 0.8027 | 0.6650 | 0.8027 | 0.8959 |
| 0.055 | 5.9455 | 2402 | 0.8027 | 0.6650 | 0.8027 | 0.8960 |
| 0.055 | 5.9505 | 2404 | 0.7997 | 0.6650 | 0.7997 | 0.8942 |
| 0.055 | 5.9554 | 2406 | 0.8004 | 0.6488 | 0.8004 | 0.8947 |
| 0.055 | 5.9604 | 2408 | 0.8071 | 0.6250 | 0.8071 | 0.8984 |
| 0.055 | 5.9653 | 2410 | 0.8221 | 0.5698 | 0.8221 | 0.9067 |
| 0.055 | 5.9703 | 2412 | 0.8299 | 0.5698 | 0.8299 | 0.9110 |
| 0.055 | 5.9752 | 2414 | 0.8180 | 0.5698 | 0.8180 | 0.9045 |
| 0.055 | 5.9802 | 2416 | 0.8125 | 0.5698 | 0.8125 | 0.9014 |
| 0.055 | 5.9851 | 2418 | 0.8004 | 0.5698 | 0.8004 | 0.8946 |
| 0.055 | 5.9901 | 2420 | 0.7900 | 0.5726 | 0.7900 | 0.8888 |
| 0.055 | 5.9950 | 2422 | 0.7832 | 0.6057 | 0.7832 | 0.8850 |
| 0.055 | 6.0000 | 2424 | 0.7780 | 0.6269 | 0.7780 | 0.8820 |
| 0.055 | 6.0050 | 2426 | 0.7737 | 0.6269 | 0.7737 | 0.8796 |
| 0.055 | 6.0099 | 2428 | 0.7738 | 0.6250 | 0.7738 | 0.8797 |
| 0.055 | 6.0149 | 2430 | 0.7730 | 0.6327 | 0.7730 | 0.8792 |
| 0.055 | 6.0198 | 2432 | 0.7738 | 0.6530 | 0.7738 | 0.8797 |
| 0.055 | 6.0248 | 2434 | 0.7764 | 0.6650 | 0.7764 | 0.8811 |
| 0.055 | 6.0297 | 2436 | 0.7780 | 0.6468 | 0.7780 | 0.8821 |
| 0.055 | 6.0347 | 2438 | 0.7798 | 0.6650 | 0.7798 | 0.8831 |
| 0.055 | 6.0396 | 2440 | 0.7819 | 0.6468 | 0.7819 | 0.8843 |
| 0.055 | 6.0446 | 2442 | 0.7840 | 0.6260 | 0.7840 | 0.8854 |
| 0.055 | 6.0495 | 2444 | 0.7794 | 0.6040 | 0.7794 | 0.8829 |
| 0.055 | 6.0545 | 2446 | 0.7824 | 0.6092 | 0.7824 | 0.8845 |
| 0.055 | 6.0594 | 2448 | 0.7988 | 0.5930 | 0.7988 | 0.8937 |
| 0.055 | 6.0644 | 2450 | 0.8336 | 0.5878 | 0.8336 | 0.9130 |
| 0.055 | 6.0693 | 2452 | 0.8686 | 0.6230 | 0.8686 | 0.9320 |
| 0.055 | 6.0743 | 2454 | 0.8913 | 0.6230 | 0.8913 | 0.9441 |
| 0.055 | 6.0792 | 2456 | 0.9211 | 0.5866 | 0.9211 | 0.9598 |
| 0.055 | 6.0842 | 2458 | 0.9495 | 0.5329 | 0.9495 | 0.9744 |
| 0.055 | 6.0891 | 2460 | 0.9739 | 0.5203 | 0.9739 | 0.9869 |
| 0.055 | 6.0941 | 2462 | 0.9812 | 0.5410 | 0.9812 | 0.9906 |
| 0.055 | 6.0990 | 2464 | 0.9945 | 0.5610 | 0.9945 | 0.9973 |
| 0.055 | 6.1040 | 2466 | 1.0080 | 0.5610 | 1.0080 | 1.0040 |
| 0.055 | 6.1089 | 2468 | 1.0322 | 0.5613 | 1.0322 | 1.0160 |
| 0.055 | 6.1139 | 2470 | 1.0104 | 0.5408 | 1.0104 | 1.0052 |
| 0.055 | 6.1188 | 2472 | 0.9619 | 0.5931 | 0.9619 | 0.9807 |
| 0.055 | 6.1238 | 2474 | 0.9522 | 0.6061 | 0.9522 | 0.9758 |
| 0.055 | 6.1287 | 2476 | 0.9513 | 0.6061 | 0.9513 | 0.9754 |
| 0.055 | 6.1337 | 2478 | 0.9394 | 0.6061 | 0.9394 | 0.9692 |
| 0.055 | 6.1386 | 2480 | 0.9199 | 0.5854 | 0.9199 | 0.9591 |
| 0.055 | 6.1436 | 2482 | 0.9043 | 0.5854 | 0.9043 | 0.9509 |
| 0.055 | 6.1485 | 2484 | 0.8908 | 0.5876 | 0.8908 | 0.9438 |
| 0.055 | 6.1535 | 2486 | 0.8693 | 0.5876 | 0.8693 | 0.9324 |
| 0.055 | 6.1584 | 2488 | 0.8534 | 0.6250 | 0.8534 | 0.9238 |
| 0.055 | 6.1634 | 2490 | 0.8508 | 0.6250 | 0.8508 | 0.9224 |
| 0.055 | 6.1683 | 2492 | 0.8525 | 0.6250 | 0.8525 | 0.9233 |
| 0.055 | 6.1733 | 2494 | 0.8532 | 0.6192 | 0.8532 | 0.9237 |
| 0.055 | 6.1782 | 2496 | 0.8512 | 0.6382 | 0.8512 | 0.9226 |
| 0.055 | 6.1832 | 2498 | 0.8465 | 0.6278 | 0.8465 | 0.9201 |
| 0.0495 | 6.1881 | 2500 | 0.8247 | 0.5746 | 0.8247 | 0.9081 |
| 0.0495 | 6.1931 | 2502 | 0.8306 | 0.5939 | 0.8306 | 0.9114 |
| 0.0495 | 6.1980 | 2504 | 0.8412 | 0.5939 | 0.8412 | 0.9172 |
| 0.0495 | 6.2030 | 2506 | 0.8395 | 0.5939 | 0.8395 | 0.9163 |
| 0.0495 | 6.2079 | 2508 | 0.8364 | 0.5939 | 0.8364 | 0.9146 |
| 0.0495 | 6.2129 | 2510 | 0.8386 | 0.5939 | 0.8386 | 0.9158 |
| 0.0495 | 6.2178 | 2512 | 0.8440 | 0.5939 | 0.8440 | 0.9187 |
| 0.0495 | 6.2228 | 2514 | 0.8535 | 0.6175 | 0.8535 | 0.9239 |
| 0.0495 | 6.2277 | 2516 | 0.8665 | 0.6121 | 0.8665 | 0.9309 |
| 0.0495 | 6.2327 | 2518 | 0.8688 | 0.6121 | 0.8688 | 0.9321 |
| 0.0495 | 6.2376 | 2520 | 0.8792 | 0.6121 | 0.8792 | 0.9377 |
| 0.0495 | 6.2426 | 2522 | 0.8824 | 0.6121 | 0.8824 | 0.9394 |
| 0.0495 | 6.2475 | 2524 | 0.8798 | 0.6382 | 0.8798 | 0.9380 |
| 0.0495 | 6.2525 | 2526 | 0.8774 | 0.6382 | 0.8774 | 0.9367 |
| 0.0495 | 6.2574 | 2528 | 0.8756 | 0.6382 | 0.8756 | 0.9358 |
| 0.0495 | 6.2624 | 2530 | 0.8870 | 0.6382 | 0.8870 | 0.9418 |
| 0.0495 | 6.2673 | 2532 | 0.8884 | 0.6382 | 0.8884 | 0.9425 |
| 0.0495 | 6.2723 | 2534 | 0.8761 | 0.6278 | 0.8761 | 0.9360 |
| 0.0495 | 6.2772 | 2536 | 0.8607 | 0.6278 | 0.8607 | 0.9277 |
| 0.0495 | 6.2822 | 2538 | 0.8362 | 0.5746 | 0.8362 | 0.9144 |
| 0.0495 | 6.2871 | 2540 | 0.8322 | 0.5746 | 0.8322 | 0.9122 |
| 0.0495 | 6.2921 | 2542 | 0.8231 | 0.5767 | 0.8231 | 0.9072 |
| 0.0495 | 6.2970 | 2544 | 0.8191 | 0.5774 | 0.8191 | 0.9051 |
| 0.0495 | 6.3020 | 2546 | 0.8252 | 0.5746 | 0.8252 | 0.9084 |
| 0.0495 | 6.3069 | 2548 | 0.8311 | 0.5746 | 0.8311 | 0.9117 |
| 0.0495 | 6.3119 | 2550 | 0.8467 | 0.5848 | 0.8467 | 0.9201 |
| 0.0495 | 6.3168 | 2552 | 0.8829 | 0.6382 | 0.8829 | 0.9397 |
| 0.0495 | 6.3218 | 2554 | 0.9177 | 0.6382 | 0.9177 | 0.9579 |
| 0.0495 | 6.3267 | 2556 | 0.9343 | 0.5870 | 0.9343 | 0.9666 |
| 0.0495 | 6.3317 | 2558 | 0.9408 | 0.5870 | 0.9408 | 0.9700 |
| 0.0495 | 6.3366 | 2560 | 0.9438 | 0.6055 | 0.9438 | 0.9715 |
| 0.0495 | 6.3416 | 2562 | 0.9511 | 0.5870 | 0.9511 | 0.9752 |
| 0.0495 | 6.3465 | 2564 | 0.9689 | 0.5870 | 0.9689 | 0.9843 |
| 0.0495 | 6.3515 | 2566 | 0.9881 | 0.5841 | 0.9881 | 0.9940 |
| 0.0495 | 6.3564 | 2568 | 0.9935 | 0.5841 | 0.9935 | 0.9968 |
| 0.0495 | 6.3614 | 2570 | 1.0177 | 0.5511 | 1.0177 | 1.0088 |
| 0.0495 | 6.3663 | 2572 | 1.0282 | 0.5699 | 1.0282 | 1.0140 |
| 0.0495 | 6.3713 | 2574 | 1.0193 | 0.5699 | 1.0193 | 1.0096 |
| 0.0495 | 6.3762 | 2576 | 0.9941 | 0.6055 | 0.9941 | 0.9971 |
| 0.0495 | 6.3812 | 2578 | 0.9714 | 0.6055 | 0.9714 | 0.9856 |
| 0.0495 | 6.3861 | 2580 | 0.9441 | 0.5833 | 0.9441 | 0.9717 |
| 0.0495 | 6.3911 | 2582 | 0.9351 | 0.5833 | 0.9351 | 0.9670 |
| 0.0495 | 6.3960 | 2584 | 0.9222 | 0.5833 | 0.9222 | 0.9603 |
| 0.0495 | 6.4010 | 2586 | 0.9203 | 0.5833 | 0.9203 | 0.9593 |
| 0.0495 | 6.4059 | 2588 | 0.9266 | 0.5649 | 0.9266 | 0.9626 |
| 0.0495 | 6.4109 | 2590 | 0.9329 | 0.5870 | 0.9329 | 0.9659 |
| 0.0495 | 6.4158 | 2592 | 0.9482 | 0.5870 | 0.9482 | 0.9738 |
| 0.0495 | 6.4208 | 2594 | 0.9593 | 0.5870 | 0.9593 | 0.9795 |
| 0.0495 | 6.4257 | 2596 | 0.9708 | 0.5870 | 0.9708 | 0.9853 |
| 0.0495 | 6.4307 | 2598 | 0.9845 | 0.5699 | 0.9845 | 0.9922 |
| 0.0495 | 6.4356 | 2600 | 0.9959 | 0.5699 | 0.9959 | 0.9980 |
| 0.0495 | 6.4406 | 2602 | 1.0114 | 0.5699 | 1.0114 | 1.0057 |
| 0.0495 | 6.4455 | 2604 | 0.9959 | 0.5511 | 0.9959 | 0.9980 |
| 0.0495 | 6.4505 | 2606 | 0.9863 | 0.5511 | 0.9863 | 0.9931 |
| 0.0495 | 6.4554 | 2608 | 0.9726 | 0.5511 | 0.9726 | 0.9862 |
| 0.0495 | 6.4604 | 2610 | 0.9594 | 0.5511 | 0.9594 | 0.9795 |
| 0.0495 | 6.4653 | 2612 | 0.9457 | 0.5652 | 0.9457 | 0.9725 |
| 0.0495 | 6.4703 | 2614 | 0.9365 | 0.5841 | 0.9365 | 0.9677 |
| 0.0495 | 6.4752 | 2616 | 0.9369 | 0.5841 | 0.9369 | 0.9680 |
| 0.0495 | 6.4802 | 2618 | 0.9332 | 0.5841 | 0.9332 | 0.9660 |
| 0.0495 | 6.4851 | 2620 | 0.9341 | 0.5841 | 0.9341 | 0.9665 |
| 0.0495 | 6.4901 | 2622 | 0.9311 | 0.5870 | 0.9311 | 0.9649 |
| 0.0495 | 6.4950 | 2624 | 0.9224 | 0.5649 | 0.9224 | 0.9604 |
| 0.0495 | 6.5000 | 2626 | 0.9229 | 0.5649 | 0.9229 | 0.9607 |
| 0.0495 | 6.5050 | 2628 | 0.9192 | 0.5649 | 0.9192 | 0.9588 |
| 0.0495 | 6.5099 | 2630 | 0.9193 | 0.5649 | 0.9193 | 0.9588 |
| 0.0495 | 6.5149 | 2632 | 0.9233 | 0.5649 | 0.9233 | 0.9609 |
| 0.0495 | 6.5198 | 2634 | 0.9170 | 0.5649 | 0.9170 | 0.9576 |
| 0.0495 | 6.5248 | 2636 | 0.9160 | 0.5870 | 0.9160 | 0.9571 |
| 0.0495 | 6.5297 | 2638 | 0.9212 | 0.5870 | 0.9212 | 0.9598 |
| 0.0495 | 6.5347 | 2640 | 0.9248 | 0.5870 | 0.9248 | 0.9617 |
| 0.0495 | 6.5396 | 2642 | 0.9350 | 0.5870 | 0.9350 | 0.9669 |
| 0.0495 | 6.5446 | 2644 | 0.9690 | 0.5699 | 0.9690 | 0.9844 |
| 0.0495 | 6.5495 | 2646 | 1.0132 | 0.5699 | 1.0132 | 1.0066 |
| 0.0495 | 6.5545 | 2648 | 1.0369 | 0.5699 | 1.0369 | 1.0183 |
| 0.0495 | 6.5594 | 2650 | 1.0566 | 0.5699 | 1.0566 | 1.0279 |
| 0.0495 | 6.5644 | 2652 | 1.0615 | 0.5584 | 1.0615 | 1.0303 |
| 0.0495 | 6.5693 | 2654 | 1.0561 | 0.5584 | 1.0561 | 1.0277 |
| 0.0495 | 6.5743 | 2656 | 1.0434 | 0.5393 | 1.0434 | 1.0215 |
| 0.0495 | 6.5792 | 2658 | 1.0434 | 0.5393 | 1.0434 | 1.0215 |
| 0.0495 | 6.5842 | 2660 | 1.0405 | 0.5274 | 1.0405 | 1.0200 |
| 0.0495 | 6.5891 | 2662 | 1.0390 | 0.5269 | 1.0390 | 1.0193 |
| 0.0495 | 6.5941 | 2664 | 1.0448 | 0.5269 | 1.0448 | 1.0221 |
| 0.0495 | 6.5990 | 2666 | 1.0487 | 0.5269 | 1.0487 | 1.0241 |
| 0.0495 | 6.6040 | 2668 | 1.0447 | 0.5393 | 1.0447 | 1.0221 |
| 0.0495 | 6.6089 | 2670 | 1.0416 | 0.5393 | 1.0416 | 1.0206 |
| 0.0495 | 6.6139 | 2672 | 1.0284 | 0.5511 | 1.0284 | 1.0141 |
| 0.0495 | 6.6188 | 2674 | 1.0141 | 0.5511 | 1.0141 | 1.0070 |
| 0.0495 | 6.6238 | 2676 | 1.0013 | 0.5511 | 1.0013 | 1.0006 |
| 0.0495 | 6.6287 | 2678 | 0.9901 | 0.5511 | 0.9901 | 0.9950 |
| 0.0495 | 6.6337 | 2680 | 0.9887 | 0.5511 | 0.9887 | 0.9944 |
| 0.0495 | 6.6386 | 2682 | 0.9765 | 0.5699 | 0.9765 | 0.9882 |
| 0.0495 | 6.6436 | 2684 | 0.9632 | 0.5699 | 0.9632 | 0.9814 |
| 0.0495 | 6.6485 | 2686 | 0.9602 | 0.5699 | 0.9602 | 0.9799 |
| 0.0495 | 6.6535 | 2688 | 0.9589 | 0.6023 | 0.9589 | 0.9792 |
| 0.0495 | 6.6584 | 2690 | 0.9571 | 0.6023 | 0.9571 | 0.9783 |
| 0.0495 | 6.6634 | 2692 | 0.9479 | 0.5841 | 0.9479 | 0.9736 |
| 0.0495 | 6.6683 | 2694 | 0.9335 | 0.5841 | 0.9335 | 0.9662 |
| 0.0495 | 6.6733 | 2696 | 0.9205 | 0.5870 | 0.9205 | 0.9594 |
| 0.0495 | 6.6782 | 2698 | 0.9076 | 0.6121 | 0.9076 | 0.9527 |
| 0.0495 | 6.6832 | 2700 | 0.8918 | 0.6020 | 0.8918 | 0.9444 |
| 0.0495 | 6.6881 | 2702 | 0.8800 | 0.6020 | 0.8800 | 0.9381 |
| 0.0495 | 6.6931 | 2704 | 0.8721 | 0.6278 | 0.8721 | 0.9339 |
| 0.0495 | 6.6980 | 2706 | 0.8674 | 0.6278 | 0.8674 | 0.9313 |
| 0.0495 | 6.7030 | 2708 | 0.8890 | 0.6121 | 0.8890 | 0.9429 |
| 0.0495 | 6.7079 | 2710 | 0.9117 | 0.5533 | 0.9117 | 0.9548 |
| 0.0495 | 6.7129 | 2712 | 0.9507 | 0.5699 | 0.9507 | 0.9750 |
| 0.0495 | 6.7178 | 2714 | 0.9835 | 0.5699 | 0.9835 | 0.9917 |
| 0.0495 | 6.7228 | 2716 | 1.0269 | 0.5586 | 1.0269 | 1.0134 |
| 0.0495 | 6.7277 | 2718 | 1.0605 | 0.5269 | 1.0605 | 1.0298 |
| 0.0495 | 6.7327 | 2720 | 1.0848 | 0.5263 | 1.0848 | 1.0415 |
| 0.0495 | 6.7376 | 2722 | 1.0838 | 0.5263 | 1.0838 | 1.0410 |
| 0.0495 | 6.7426 | 2724 | 1.1002 | 0.5263 | 1.1002 | 1.0489 |
| 0.0495 | 6.7475 | 2726 | 1.1103 | 0.5467 | 1.1103 | 1.0537 |
| 0.0495 | 6.7525 | 2728 | 1.1237 | 0.5467 | 1.1237 | 1.0600 |
| 0.0495 | 6.7574 | 2730 | 1.1273 | 0.5467 | 1.1273 | 1.0618 |
| 0.0495 | 6.7624 | 2732 | 1.1225 | 0.5467 | 1.1225 | 1.0595 |
| 0.0495 | 6.7673 | 2734 | 1.1193 | 0.5467 | 1.1193 | 1.0580 |
| 0.0495 | 6.7723 | 2736 | 1.1021 | 0.5588 | 1.1021 | 1.0498 |
| 0.0495 | 6.7772 | 2738 | 1.0721 | 0.5588 | 1.0721 | 1.0354 |
| 0.0495 | 6.7822 | 2740 | 1.0398 | 0.5388 | 1.0398 | 1.0197 |
| 0.0495 | 6.7871 | 2742 | 1.0064 | 0.5388 | 1.0064 | 1.0032 |
| 0.0495 | 6.7921 | 2744 | 0.9918 | 0.5410 | 0.9918 | 0.9959 |
| 0.0495 | 6.7970 | 2746 | 0.9933 | 0.5393 | 0.9933 | 0.9966 |
| 0.0495 | 6.8020 | 2748 | 0.9945 | 0.5699 | 0.9945 | 0.9973 |
| 0.0495 | 6.8069 | 2750 | 0.9993 | 0.5699 | 0.9993 | 0.9997 |
| 0.0495 | 6.8119 | 2752 | 0.9998 | 0.5699 | 0.9998 | 0.9999 |
| 0.0495 | 6.8168 | 2754 | 0.9991 | 0.5699 | 0.9991 | 0.9996 |
| 0.0495 | 6.8218 | 2756 | 1.0029 | 0.5699 | 1.0029 | 1.0015 |
| 0.0495 | 6.8267 | 2758 | 0.9931 | 0.5699 | 0.9931 | 0.9965 |
| 0.0495 | 6.8317 | 2760 | 0.9800 | 0.5724 | 0.9800 | 0.9900 |
| 0.0495 | 6.8366 | 2762 | 0.9660 | 0.5724 | 0.9660 | 0.9829 |
| 0.0495 | 6.8416 | 2764 | 0.9496 | 0.5724 | 0.9496 | 0.9745 |
| 0.0495 | 6.8465 | 2766 | 0.9403 | 0.5724 | 0.9403 | 0.9697 |
| 0.0495 | 6.8515 | 2768 | 0.9319 | 0.5724 | 0.9319 | 0.9654 |
| 0.0495 | 6.8564 | 2770 | 0.9149 | 0.6055 | 0.9149 | 0.9565 |
| 0.0495 | 6.8614 | 2772 | 0.9017 | 0.6055 | 0.9017 | 0.9496 |
| 0.0495 | 6.8663 | 2774 | 0.9018 | 0.6055 | 0.9018 | 0.9496 |
| 0.0495 | 6.8713 | 2776 | 0.8925 | 0.6055 | 0.8925 | 0.9447 |
| 0.0495 | 6.8762 | 2778 | 0.8882 | 0.6055 | 0.8882 | 0.9425 |
| 0.0495 | 6.8812 | 2780 | 0.8766 | 0.6055 | 0.8766 | 0.9363 |
| 0.0495 | 6.8861 | 2782 | 0.8730 | 0.6055 | 0.8730 | 0.9343 |
| 0.0495 | 6.8911 | 2784 | 0.8828 | 0.6055 | 0.8828 | 0.9396 |
| 0.0495 | 6.8960 | 2786 | 0.8982 | 0.6055 | 0.8982 | 0.9477 |
| 0.0495 | 6.9010 | 2788 | 0.9029 | 0.6055 | 0.9029 | 0.9502 |
| 0.0495 | 6.9059 | 2790 | 0.9070 | 0.6055 | 0.9070 | 0.9524 |
| 0.0495 | 6.9109 | 2792 | 0.9176 | 0.6055 | 0.9176 | 0.9579 |
| 0.0495 | 6.9158 | 2794 | 0.9174 | 0.6055 | 0.9174 | 0.9578 |
| 0.0495 | 6.9208 | 2796 | 0.9187 | 0.6055 | 0.9187 | 0.9585 |
| 0.0495 | 6.9257 | 2798 | 0.9151 | 0.6055 | 0.9151 | 0.9566 |
| 0.0495 | 6.9307 | 2800 | 0.9081 | 0.6055 | 0.9081 | 0.9530 |
| 0.0495 | 6.9356 | 2802 | 0.8828 | 0.6055 | 0.8828 | 0.9396 |
| 0.0495 | 6.9406 | 2804 | 0.8759 | 0.6055 | 0.8759 | 0.9359 |
| 0.0495 | 6.9455 | 2806 | 0.8865 | 0.6055 | 0.8865 | 0.9415 |
| 0.0495 | 6.9505 | 2808 | 0.8952 | 0.6055 | 0.8952 | 0.9461 |
| 0.0495 | 6.9554 | 2810 | 0.8896 | 0.6055 | 0.8896 | 0.9432 |
| 0.0495 | 6.9604 | 2812 | 0.8821 | 0.6055 | 0.8821 | 0.9392 |
| 0.0495 | 6.9653 | 2814 | 0.8701 | 0.6055 | 0.8701 | 0.9328 |
| 0.0495 | 6.9703 | 2816 | 0.8546 | 0.5957 | 0.8546 | 0.9244 |
| 0.0495 | 6.9752 | 2818 | 0.8391 | 0.5957 | 0.8391 | 0.9160 |
| 0.0495 | 6.9802 | 2820 | 0.8352 | 0.5861 | 0.8352 | 0.9139 |
| 0.0495 | 6.9851 | 2822 | 0.8402 | 0.5861 | 0.8402 | 0.9166 |
| 0.0495 | 6.9901 | 2824 | 0.8507 | 0.5861 | 0.8507 | 0.9224 |
| 0.0495 | 6.9950 | 2826 | 0.8585 | 0.5861 | 0.8585 | 0.9266 |
| 0.0495 | 7.0000 | 2828 | 0.8660 | 0.5861 | 0.8660 | 0.9306 |
| 0.0495 | 7.0050 | 2830 | 0.8474 | 0.5759 | 0.8474 | 0.9205 |
| 0.0495 | 7.0099 | 2832 | 0.8325 | 0.5955 | 0.8325 | 0.9124 |
| 0.0495 | 7.0149 | 2834 | 0.8140 | 0.6434 | 0.8140 | 0.9022 |
| 0.0495 | 7.0198 | 2836 | 0.7999 | 0.6259 | 0.7999 | 0.8944 |
| 0.0495 | 7.0248 | 2838 | 0.7882 | 0.6259 | 0.7882 | 0.8878 |
| 0.0495 | 7.0297 | 2840 | 0.7857 | 0.6295 | 0.7857 | 0.8864 |
| 0.0495 | 7.0347 | 2842 | 0.7821 | 0.6092 | 0.7821 | 0.8844 |
| 0.0495 | 7.0396 | 2844 | 0.7797 | 0.6092 | 0.7797 | 0.8830 |
| 0.0495 | 7.0446 | 2846 | 0.7844 | 0.6092 | 0.7844 | 0.8856 |
| 0.0495 | 7.0495 | 2848 | 0.7895 | 0.6092 | 0.7895 | 0.8886 |
| 0.0495 | 7.0545 | 2850 | 0.7954 | 0.6058 | 0.7954 | 0.8919 |
| 0.0495 | 7.0594 | 2852 | 0.8196 | 0.6241 | 0.8196 | 0.9053 |
| 0.0495 | 7.0644 | 2854 | 0.8474 | 0.5976 | 0.8474 | 0.9205 |
| 0.0495 | 7.0693 | 2856 | 0.8601 | 0.5861 | 0.8601 | 0.9274 |
| 0.0495 | 7.0743 | 2858 | 0.8676 | 0.5861 | 0.8676 | 0.9315 |
| 0.0495 | 7.0792 | 2860 | 0.8677 | 0.5645 | 0.8677 | 0.9315 |
| 0.0495 | 7.0842 | 2862 | 0.8668 | 0.5645 | 0.8668 | 0.9310 |
| 0.0495 | 7.0891 | 2864 | 0.8559 | 0.5645 | 0.8559 | 0.9252 |
| 0.0495 | 7.0941 | 2866 | 0.8479 | 0.5645 | 0.8479 | 0.9208 |
| 0.0495 | 7.0990 | 2868 | 0.8475 | 0.5553 | 0.8475 | 0.9206 |
| 0.0495 | 7.1040 | 2870 | 0.8514 | 0.5649 | 0.8514 | 0.9227 |
| 0.0495 | 7.1089 | 2872 | 0.8629 | 0.5833 | 0.8629 | 0.9289 |
| 0.0495 | 7.1139 | 2874 | 0.8626 | 0.5833 | 0.8626 | 0.9288 |
| 0.0495 | 7.1188 | 2876 | 0.8560 | 0.5833 | 0.8560 | 0.9252 |
| 0.0495 | 7.1238 | 2878 | 0.8537 | 0.5833 | 0.8537 | 0.9240 |
| 0.0495 | 7.1287 | 2880 | 0.8541 | 0.5833 | 0.8541 | 0.9242 |
| 0.0495 | 7.1337 | 2882 | 0.8478 | 0.5833 | 0.8478 | 0.9207 |
| 0.0495 | 7.1386 | 2884 | 0.8395 | 0.5889 | 0.8395 | 0.9162 |
| 0.0495 | 7.1436 | 2886 | 0.8377 | 0.5889 | 0.8377 | 0.9153 |
| 0.0495 | 7.1485 | 2888 | 0.8341 | 0.5889 | 0.8341 | 0.9133 |
| 0.0495 | 7.1535 | 2890 | 0.8283 | 0.6140 | 0.8283 | 0.9101 |
| 0.0495 | 7.1584 | 2892 | 0.8281 | 0.6140 | 0.8281 | 0.9100 |
| 0.0495 | 7.1634 | 2894 | 0.8307 | 0.6140 | 0.8307 | 0.9115 |
| 0.0495 | 7.1683 | 2896 | 0.8311 | 0.6140 | 0.8311 | 0.9116 |
| 0.0495 | 7.1733 | 2898 | 0.8323 | 0.6140 | 0.8323 | 0.9123 |
| 0.0495 | 7.1782 | 2900 | 0.8387 | 0.5889 | 0.8387 | 0.9158 |
| 0.0495 | 7.1832 | 2902 | 0.8357 | 0.6140 | 0.8357 | 0.9142 |
| 0.0495 | 7.1881 | 2904 | 0.8260 | 0.6140 | 0.8260 | 0.9088 |
| 0.0495 | 7.1931 | 2906 | 0.8158 | 0.6039 | 0.8158 | 0.9032 |
| 0.0495 | 7.1980 | 2908 | 0.8060 | 0.6058 | 0.8060 | 0.8978 |
| 0.0495 | 7.2030 | 2910 | 0.8014 | 0.6058 | 0.8014 | 0.8952 |
| 0.0495 | 7.2079 | 2912 | 0.7985 | 0.6259 | 0.7985 | 0.8936 |
| 0.0495 | 7.2129 | 2914 | 0.8048 | 0.6259 | 0.8048 | 0.8971 |
| 0.0495 | 7.2178 | 2916 | 0.8087 | 0.6259 | 0.8087 | 0.8993 |
| 0.0495 | 7.2228 | 2918 | 0.8086 | 0.6259 | 0.8086 | 0.8992 |
| 0.0495 | 7.2277 | 2920 | 0.8084 | 0.6259 | 0.8084 | 0.8991 |
| 0.0495 | 7.2327 | 2922 | 0.8224 | 0.6039 | 0.8224 | 0.9069 |
| 0.0495 | 7.2376 | 2924 | 0.8321 | 0.6140 | 0.8321 | 0.9122 |
| 0.0495 | 7.2426 | 2926 | 0.8393 | 0.6192 | 0.8393 | 0.9161 |
| 0.0495 | 7.2475 | 2928 | 0.8457 | 0.6192 | 0.8457 | 0.9196 |
| 0.0495 | 7.2525 | 2930 | 0.8526 | 0.6192 | 0.8526 | 0.9234 |
| 0.0495 | 7.2574 | 2932 | 0.8620 | 0.6382 | 0.8620 | 0.9284 |
| 0.0495 | 7.2624 | 2934 | 0.8748 | 0.6382 | 0.8748 | 0.9353 |
| 0.0495 | 7.2673 | 2936 | 0.8864 | 0.6382 | 0.8864 | 0.9415 |
| 0.0495 | 7.2723 | 2938 | 0.9033 | 0.6121 | 0.9033 | 0.9504 |
| 0.0495 | 7.2772 | 2940 | 0.9363 | 0.5778 | 0.9363 | 0.9676 |
| 0.0495 | 7.2822 | 2942 | 0.9814 | 0.5704 | 0.9814 | 0.9907 |
| 0.0495 | 7.2871 | 2944 | 1.0205 | 0.5709 | 1.0205 | 1.0102 |
| 0.0495 | 7.2921 | 2946 | 1.0541 | 0.5588 | 1.0541 | 1.0267 |
| 0.0495 | 7.2970 | 2948 | 1.0823 | 0.5588 | 1.0823 | 1.0403 |
| 0.0495 | 7.3020 | 2950 | 1.0930 | 0.5118 | 1.0930 | 1.0454 |
| 0.0495 | 7.3069 | 2952 | 1.0941 | 0.5000 | 1.0941 | 1.0460 |
| 0.0495 | 7.3119 | 2954 | 1.0893 | 0.5349 | 1.0893 | 1.0437 |
| 0.0495 | 7.3168 | 2956 | 1.0838 | 0.5588 | 1.0838 | 1.0411 |
| 0.0495 | 7.3218 | 2958 | 1.0693 | 0.5588 | 1.0693 | 1.0341 |
| 0.0495 | 7.3267 | 2960 | 1.0585 | 0.5388 | 1.0585 | 1.0288 |
| 0.0495 | 7.3317 | 2962 | 1.0486 | 0.5388 | 1.0486 | 1.0240 |
| 0.0495 | 7.3366 | 2964 | 1.0430 | 0.5511 | 1.0430 | 1.0213 |
| 0.0495 | 7.3416 | 2966 | 1.0308 | 0.5511 | 1.0308 | 1.0153 |
| 0.0495 | 7.3465 | 2968 | 1.0206 | 0.5704 | 1.0206 | 1.0103 |
| 0.0495 | 7.3515 | 2970 | 1.0251 | 0.5704 | 1.0251 | 1.0125 |
| 0.0495 | 7.3564 | 2972 | 1.0394 | 0.5704 | 1.0394 | 1.0195 |
| 0.0495 | 7.3614 | 2974 | 1.0497 | 0.5704 | 1.0497 | 1.0246 |
| 0.0495 | 7.3663 | 2976 | 1.0571 | 0.5704 | 1.0571 | 1.0282 |
| 0.0495 | 7.3713 | 2978 | 1.0640 | 0.5704 | 1.0640 | 1.0315 |
| 0.0495 | 7.3762 | 2980 | 1.0591 | 0.5704 | 1.0591 | 1.0291 |
| 0.0495 | 7.3812 | 2982 | 1.0491 | 0.5704 | 1.0491 | 1.0242 |
| 0.0495 | 7.3861 | 2984 | 1.0414 | 0.5704 | 1.0414 | 1.0205 |
| 0.0495 | 7.3911 | 2986 | 1.0366 | 0.5704 | 1.0366 | 1.0182 |
| 0.0495 | 7.3960 | 2988 | 1.0306 | 0.5511 | 1.0306 | 1.0152 |
| 0.0495 | 7.4010 | 2990 | 1.0234 | 0.5511 | 1.0234 | 1.0116 |
| 0.0495 | 7.4059 | 2992 | 1.0228 | 0.5511 | 1.0228 | 1.0114 |
| 0.0495 | 7.4109 | 2994 | 1.0308 | 0.5511 | 1.0308 | 1.0153 |
| 0.0495 | 7.4158 | 2996 | 1.0287 | 0.5511 | 1.0287 | 1.0142 |
| 0.0495 | 7.4208 | 2998 | 1.0232 | 0.5511 | 1.0232 | 1.0115 |
| 0.0419 | 7.4257 | 3000 | 1.0243 | 0.5511 | 1.0243 | 1.0121 |
| 0.0419 | 7.4307 | 3002 | 1.0278 | 0.5511 | 1.0278 | 1.0138 |
| 0.0419 | 7.4356 | 3004 | 1.0313 | 0.5511 | 1.0313 | 1.0155 |
| 0.0419 | 7.4406 | 3006 | 1.0320 | 0.5511 | 1.0320 | 1.0159 |
| 0.0419 | 7.4455 | 3008 | 1.0261 | 0.5511 | 1.0261 | 1.0130 |
| 0.0419 | 7.4505 | 3010 | 1.0239 | 0.5511 | 1.0239 | 1.0119 |
| 0.0419 | 7.4554 | 3012 | 1.0251 | 0.5511 | 1.0251 | 1.0125 |
| 0.0419 | 7.4604 | 3014 | 1.0293 | 0.5511 | 1.0293 | 1.0145 |
| 0.0419 | 7.4653 | 3016 | 1.0405 | 0.5511 | 1.0405 | 1.0200 |
| 0.0419 | 7.4703 | 3018 | 1.0511 | 0.5388 | 1.0511 | 1.0252 |
| 0.0419 | 7.4752 | 3020 | 1.0624 | 0.5388 | 1.0624 | 1.0307 |
| 0.0419 | 7.4802 | 3022 | 1.0857 | 0.5388 | 1.0857 | 1.0420 |
| 0.0419 | 7.4851 | 3024 | 1.0980 | 0.5388 | 1.0980 | 1.0479 |
| 0.0419 | 7.4901 | 3026 | 1.0998 | 0.5588 | 1.0998 | 1.0487 |
| 0.0419 | 7.4950 | 3028 | 1.1160 | 0.5349 | 1.1160 | 1.0564 |
| 0.0419 | 7.5000 | 3030 | 1.1468 | 0.5118 | 1.1468 | 1.0709 |
| 0.0419 | 7.5050 | 3032 | 1.1705 | 0.5118 | 1.1705 | 1.0819 |
| 0.0419 | 7.5099 | 3034 | 1.1999 | 0.5000 | 1.1999 | 1.0954 |
| 0.0419 | 7.5149 | 3036 | 1.2289 | 0.5000 | 1.2289 | 1.1086 |
| 0.0419 | 7.5198 | 3038 | 1.2531 | 0.5000 | 1.2531 | 1.1194 |
| 0.0419 | 7.5248 | 3040 | 1.2780 | 0.4882 | 1.2780 | 1.1305 |
| 0.0419 | 7.5297 | 3042 | 1.2985 | 0.4082 | 1.2985 | 1.1395 |
| 0.0419 | 7.5347 | 3044 | 1.3174 | 0.4093 | 1.3174 | 1.1478 |
| 0.0419 | 7.5396 | 3046 | 1.3308 | 0.4093 | 1.3308 | 1.1536 |
| 0.0419 | 7.5446 | 3048 | 1.3423 | 0.4093 | 1.3423 | 1.1586 |
| 0.0419 | 7.5495 | 3050 | 1.3423 | 0.4093 | 1.3423 | 1.1586 |
| 0.0419 | 7.5545 | 3052 | 1.3332 | 0.4214 | 1.3332 | 1.1547 |
| 0.0419 | 7.5594 | 3054 | 1.3088 | 0.4214 | 1.3088 | 1.1440 |
| 0.0419 | 7.5644 | 3056 | 1.2684 | 0.4780 | 1.2684 | 1.1262 |
| 0.0419 | 7.5693 | 3058 | 1.2443 | 0.5000 | 1.2443 | 1.1155 |
| 0.0419 | 7.5743 | 3060 | 1.2321 | 0.5000 | 1.2321 | 1.1100 |
| 0.0419 | 7.5792 | 3062 | 1.2120 | 0.4795 | 1.2120 | 1.1009 |
| 0.0419 | 7.5842 | 3064 | 1.1941 | 0.5025 | 1.1941 | 1.0928 |
| 0.0419 | 7.5891 | 3066 | 1.1659 | 0.5025 | 1.1659 | 1.0798 |
| 0.0419 | 7.5941 | 3068 | 1.1435 | 0.5349 | 1.1435 | 1.0693 |
| 0.0419 | 7.5990 | 3070 | 1.1236 | 0.5588 | 1.1236 | 1.0600 |
| 0.0419 | 7.6040 | 3072 | 1.1115 | 0.5588 | 1.1115 | 1.0543 |
| 0.0419 | 7.6089 | 3074 | 1.1062 | 0.5588 | 1.1062 | 1.0518 |
| 0.0419 | 7.6139 | 3076 | 1.0950 | 0.5588 | 1.0950 | 1.0464 |
| 0.0419 | 7.6188 | 3078 | 1.0858 | 0.5588 | 1.0858 | 1.0420 |
| 0.0419 | 7.6238 | 3080 | 1.0810 | 0.5588 | 1.0810 | 1.0397 |
| 0.0419 | 7.6287 | 3082 | 1.0847 | 0.5588 | 1.0847 | 1.0415 |
| 0.0419 | 7.6337 | 3084 | 1.0857 | 0.5588 | 1.0857 | 1.0420 |
| 0.0419 | 7.6386 | 3086 | 1.0895 | 0.5588 | 1.0895 | 1.0438 |
| 0.0419 | 7.6436 | 3088 | 1.0931 | 0.5408 | 1.0931 | 1.0455 |
| 0.0419 | 7.6485 | 3090 | 1.1079 | 0.5349 | 1.1079 | 1.0526 |
| 0.0419 | 7.6535 | 3092 | 1.1233 | 0.5349 | 1.1233 | 1.0599 |
| 0.0419 | 7.6584 | 3094 | 1.1413 | 0.5349 | 1.1413 | 1.0683 |
| 0.0419 | 7.6634 | 3096 | 1.1585 | 0.5349 | 1.1585 | 1.0763 |
| 0.0419 | 7.6683 | 3098 | 1.1737 | 0.5349 | 1.1737 | 1.0834 |
| 0.0419 | 7.6733 | 3100 | 1.1717 | 0.5349 | 1.1717 | 1.0824 |
| 0.0419 | 7.6782 | 3102 | 1.1782 | 0.5229 | 1.1782 | 1.0854 |
| 0.0419 | 7.6832 | 3104 | 1.1796 | 0.5229 | 1.1796 | 1.0861 |
| 0.0419 | 7.6881 | 3106 | 1.1811 | 0.5229 | 1.1811 | 1.0868 |
| 0.0419 | 7.6931 | 3108 | 1.1705 | 0.5349 | 1.1705 | 1.0819 |
| 0.0419 | 7.6980 | 3110 | 1.1703 | 0.5349 | 1.1703 | 1.0818 |
| 0.0419 | 7.7030 | 3112 | 1.1660 | 0.5349 | 1.1660 | 1.0798 |
| 0.0419 | 7.7079 | 3114 | 1.1586 | 0.5229 | 1.1586 | 1.0764 |
| 0.0419 | 7.7129 | 3116 | 1.1469 | 0.5349 | 1.1469 | 1.0709 |
| 0.0419 | 7.7178 | 3118 | 1.1337 | 0.5349 | 1.1337 | 1.0648 |
| 0.0419 | 7.7228 | 3120 | 1.1205 | 0.5368 | 1.1205 | 1.0585 |
| 0.0419 | 7.7277 | 3122 | 1.1099 | 0.5368 | 1.1099 | 1.0535 |
| 0.0419 | 7.7327 | 3124 | 1.0460 | 0.5613 | 1.0460 | 1.0227 |
| 0.0419 | 7.7376 | 3126 | 1.0062 | 0.5613 | 1.0062 | 1.0031 |
| 0.0419 | 7.7426 | 3128 | 0.9801 | 0.5534 | 0.9801 | 0.9900 |
| 0.0419 | 7.7475 | 3130 | 0.9557 | 0.5792 | 0.9557 | 0.9776 |
| 0.0419 | 7.7525 | 3132 | 0.9395 | 0.5785 | 0.9395 | 0.9693 |
| 0.0419 | 7.7574 | 3134 | 0.9233 | 0.5785 | 0.9233 | 0.9609 |
| 0.0419 | 7.7624 | 3136 | 0.9094 | 0.6033 | 0.9094 | 0.9536 |
| 0.0419 | 7.7673 | 3138 | 0.8942 | 0.6033 | 0.8942 | 0.9456 |
| 0.0419 | 7.7723 | 3140 | 0.8937 | 0.6033 | 0.8937 | 0.9454 |
| 0.0419 | 7.7772 | 3142 | 0.8897 | 0.6382 | 0.8897 | 0.9432 |
| 0.0419 | 7.7822 | 3144 | 0.8826 | 0.6382 | 0.8826 | 0.9395 |
| 0.0419 | 7.7871 | 3146 | 0.8484 | 0.6192 | 0.8484 | 0.9211 |
| 0.0419 | 7.7921 | 3148 | 0.8120 | 0.6087 | 0.8120 | 0.9011 |
| 0.0419 | 7.7970 | 3150 | 0.7869 | 0.6087 | 0.7869 | 0.8871 |
| 0.0419 | 7.8020 | 3152 | 0.7539 | 0.5889 | 0.7539 | 0.8683 |
| 0.0419 | 7.8069 | 3154 | 0.7265 | 0.5919 | 0.7265 | 0.8523 |
| 0.0419 | 7.8119 | 3156 | 0.7124 | 0.5984 | 0.7124 | 0.8440 |
| 0.0419 | 7.8168 | 3158 | 0.7098 | 0.5984 | 0.7098 | 0.8425 |
| 0.0419 | 7.8218 | 3160 | 0.7121 | 0.5984 | 0.7121 | 0.8439 |
| 0.0419 | 7.8267 | 3162 | 0.7218 | 0.5951 | 0.7218 | 0.8496 |
| 0.0419 | 7.8317 | 3164 | 0.7504 | 0.5899 | 0.7504 | 0.8663 |
| 0.0419 | 7.8366 | 3166 | 0.7783 | 0.5746 | 0.7783 | 0.8822 |
| 0.0419 | 7.8416 | 3168 | 0.8057 | 0.6087 | 0.8057 | 0.8976 |
| 0.0419 | 7.8465 | 3170 | 0.8370 | 0.6278 | 0.8370 | 0.9149 |
| 0.0419 | 7.8515 | 3172 | 0.8623 | 0.6298 | 0.8623 | 0.9286 |
| 0.0419 | 7.8564 | 3174 | 0.8866 | 0.6136 | 0.8866 | 0.9416 |
| 0.0419 | 7.8614 | 3176 | 0.9163 | 0.6049 | 0.9163 | 0.9572 |
| 0.0419 | 7.8663 | 3178 | 0.9418 | 0.5709 | 0.9418 | 0.9704 |
| 0.0419 | 7.8713 | 3180 | 0.9636 | 0.5709 | 0.9636 | 0.9817 |
| 0.0419 | 7.8762 | 3182 | 0.9801 | 0.5709 | 0.9801 | 0.9900 |
| 0.0419 | 7.8812 | 3184 | 0.9789 | 0.5709 | 0.9789 | 0.9894 |
| 0.0419 | 7.8861 | 3186 | 0.9577 | 0.5709 | 0.9577 | 0.9786 |
| 0.0419 | 7.8911 | 3188 | 0.9493 | 0.5709 | 0.9493 | 0.9743 |
| 0.0419 | 7.8960 | 3190 | 0.9387 | 0.5709 | 0.9387 | 0.9688 |
| 0.0419 | 7.9010 | 3192 | 0.9363 | 0.6049 | 0.9363 | 0.9676 |
| 0.0419 | 7.9059 | 3194 | 0.9333 | 0.6049 | 0.9333 | 0.9661 |
| 0.0419 | 7.9109 | 3196 | 0.9463 | 0.5709 | 0.9463 | 0.9728 |
| 0.0419 | 7.9158 | 3198 | 0.9508 | 0.5709 | 0.9508 | 0.9751 |
| 0.0419 | 7.9208 | 3200 | 0.9556 | 0.5709 | 0.9556 | 0.9775 |
| 0.0419 | 7.9257 | 3202 | 0.9505 | 0.5709 | 0.9505 | 0.9749 |
| 0.0419 | 7.9307 | 3204 | 0.9545 | 0.5511 | 0.9545 | 0.9770 |
| 0.0419 | 7.9356 | 3206 | 0.9561 | 0.5534 | 0.9561 | 0.9778 |
| 0.0419 | 7.9406 | 3208 | 0.9521 | 0.5534 | 0.9521 | 0.9758 |
| 0.0419 | 7.9455 | 3210 | 0.9511 | 0.5534 | 0.9511 | 0.9753 |
| 0.0419 | 7.9505 | 3212 | 0.9483 | 0.5534 | 0.9483 | 0.9738 |
| 0.0419 | 7.9554 | 3214 | 0.9469 | 0.5534 | 0.9469 | 0.9731 |
| 0.0419 | 7.9604 | 3216 | 0.9391 | 0.5534 | 0.9391 | 0.9691 |
| 0.0419 | 7.9653 | 3218 | 0.9399 | 0.5534 | 0.9399 | 0.9695 |
| 0.0419 | 7.9703 | 3220 | 0.9528 | 0.5736 | 0.9528 | 0.9761 |
| 0.0419 | 7.9752 | 3222 | 0.9578 | 0.5736 | 0.9578 | 0.9787 |
| 0.0419 | 7.9802 | 3224 | 0.9504 | 0.5736 | 0.9504 | 0.9749 |
| 0.0419 | 7.9851 | 3226 | 0.9373 | 0.5736 | 0.9373 | 0.9681 |
| 0.0419 | 7.9901 | 3228 | 0.9269 | 0.5736 | 0.9269 | 0.9627 |
| 0.0419 | 7.9950 | 3230 | 0.9117 | 0.5888 | 0.9117 | 0.9548 |
| 0.0419 | 8.0000 | 3232 | 0.9027 | 0.5687 | 0.9027 | 0.9501 |
| 0.0419 | 8.0050 | 3234 | 0.9139 | 0.5888 | 0.9139 | 0.9560 |
| 0.0419 | 8.0099 | 3236 | 0.9285 | 0.5534 | 0.9285 | 0.9636 |
| 0.0419 | 8.0149 | 3238 | 0.8951 | 0.5687 | 0.8951 | 0.9461 |
| 0.0419 | 8.0198 | 3240 | 0.8654 | 0.5687 | 0.8654 | 0.9302 |
| 0.0419 | 8.0248 | 3242 | 0.8362 | 0.6271 | 0.8362 | 0.9145 |
| 0.0419 | 8.0297 | 3244 | 0.8136 | 0.5829 | 0.8136 | 0.9020 |
| 0.0419 | 8.0347 | 3246 | 0.7919 | 0.5829 | 0.7919 | 0.8899 |
| 0.0419 | 8.0396 | 3248 | 0.7748 | 0.5820 | 0.7748 | 0.8802 |
| 0.0419 | 8.0446 | 3250 | 0.7630 | 0.5820 | 0.7630 | 0.8735 |
| 0.0419 | 8.0495 | 3252 | 0.7518 | 0.6377 | 0.7518 | 0.8671 |
| 0.0419 | 8.0545 | 3254 | 0.7463 | 0.6377 | 0.7463 | 0.8639 |
| 0.0419 | 8.0594 | 3256 | 0.7450 | 0.6377 | 0.7450 | 0.8631 |
| 0.0419 | 8.0644 | 3258 | 0.7473 | 0.6377 | 0.7473 | 0.8644 |
| 0.0419 | 8.0693 | 3260 | 0.7508 | 0.6377 | 0.7508 | 0.8665 |
| 0.0419 | 8.0743 | 3262 | 0.7541 | 0.6377 | 0.7541 | 0.8684 |
| 0.0419 | 8.0792 | 3264 | 0.7584 | 0.6016 | 0.7584 | 0.8709 |
| 0.0419 | 8.0842 | 3266 | 0.7597 | 0.6016 | 0.7597 | 0.8716 |
| 0.0419 | 8.0891 | 3268 | 0.7644 | 0.6016 | 0.7644 | 0.8743 |
| 0.0419 | 8.0941 | 3270 | 0.7728 | 0.6016 | 0.7728 | 0.8791 |
| 0.0419 | 8.0990 | 3272 | 0.7870 | 0.6029 | 0.7870 | 0.8872 |
| 0.0419 | 8.1040 | 3274 | 0.8014 | 0.5820 | 0.8014 | 0.8952 |
| 0.0419 | 8.1089 | 3276 | 0.8094 | 0.5820 | 0.8094 | 0.8997 |
| 0.0419 | 8.1139 | 3278 | 0.8168 | 0.5759 | 0.8168 | 0.9037 |
| 0.0419 | 8.1188 | 3280 | 0.8256 | 0.5974 | 0.8256 | 0.9087 |
| 0.0419 | 8.1238 | 3282 | 0.8354 | 0.5709 | 0.8354 | 0.9140 |
| 0.0419 | 8.1287 | 3284 | 0.8527 | 0.5682 | 0.8527 | 0.9234 |
| 0.0419 | 8.1337 | 3286 | 0.8737 | 0.6082 | 0.8737 | 0.9347 |
| 0.0419 | 8.1386 | 3288 | 0.8947 | 0.6082 | 0.8947 | 0.9459 |
| 0.0419 | 8.1436 | 3290 | 0.9148 | 0.6082 | 0.9148 | 0.9565 |
| 0.0419 | 8.1485 | 3292 | 0.9259 | 0.6082 | 0.9259 | 0.9622 |
| 0.0419 | 8.1535 | 3294 | 0.9271 | 0.6082 | 0.9271 | 0.9628 |
| 0.0419 | 8.1584 | 3296 | 0.9253 | 0.6082 | 0.9253 | 0.9619 |
| 0.0419 | 8.1634 | 3298 | 0.9303 | 0.6082 | 0.9303 | 0.9645 |
| 0.0419 | 8.1683 | 3300 | 0.9419 | 0.6082 | 0.9419 | 0.9705 |
| 0.0419 | 8.1733 | 3302 | 0.9564 | 0.6049 | 0.9564 | 0.9780 |
| 0.0419 | 8.1782 | 3304 | 0.9649 | 0.5709 | 0.9649 | 0.9823 |
| 0.0419 | 8.1832 | 3306 | 0.9648 | 0.5709 | 0.9648 | 0.9822 |
| 0.0419 | 8.1881 | 3308 | 0.9674 | 0.5709 | 0.9674 | 0.9835 |
| 0.0419 | 8.1931 | 3310 | 0.9635 | 0.5858 | 0.9635 | 0.9816 |
| 0.0419 | 8.1980 | 3312 | 0.9521 | 0.5888 | 0.9521 | 0.9757 |
| 0.0419 | 8.2030 | 3314 | 0.9489 | 0.5888 | 0.9489 | 0.9741 |
| 0.0419 | 8.2079 | 3316 | 0.9513 | 0.5888 | 0.9513 | 0.9753 |
| 0.0419 | 8.2129 | 3318 | 0.9488 | 0.5888 | 0.9488 | 0.9740 |
| 0.0419 | 8.2178 | 3320 | 0.9456 | 0.5879 | 0.9456 | 0.9724 |
| 0.0419 | 8.2228 | 3322 | 0.9443 | 0.5879 | 0.9443 | 0.9718 |
| 0.0419 | 8.2277 | 3324 | 0.9446 | 0.5879 | 0.9446 | 0.9719 |
| 0.0419 | 8.2327 | 3326 | 0.9543 | 0.5879 | 0.9543 | 0.9769 |
| 0.0419 | 8.2376 | 3328 | 0.9620 | 0.6036 | 0.9620 | 0.9808 |
| 0.0419 | 8.2426 | 3330 | 0.9638 | 0.6036 | 0.9638 | 0.9817 |
| 0.0419 | 8.2475 | 3332 | 0.9617 | 0.6036 | 0.9617 | 0.9807 |
| 0.0419 | 8.2525 | 3334 | 0.9573 | 0.6023 | 0.9573 | 0.9784 |
| 0.0419 | 8.2574 | 3336 | 0.9580 | 0.6023 | 0.9580 | 0.9788 |
| 0.0419 | 8.2624 | 3338 | 0.9577 | 0.6036 | 0.9577 | 0.9786 |
| 0.0419 | 8.2673 | 3340 | 0.9624 | 0.6049 | 0.9624 | 0.9810 |
| 0.0419 | 8.2723 | 3342 | 0.9724 | 0.6049 | 0.9724 | 0.9861 |
| 0.0419 | 8.2772 | 3344 | 0.9883 | 0.6049 | 0.9883 | 0.9942 |
| 0.0419 | 8.2822 | 3346 | 1.0062 | 0.5709 | 1.0062 | 1.0031 |
| 0.0419 | 8.2871 | 3348 | 1.0243 | 0.5468 | 1.0243 | 1.0121 |
| 0.0419 | 8.2921 | 3350 | 1.0440 | 0.5349 | 1.0440 | 1.0218 |
| 0.0419 | 8.2970 | 3352 | 1.0541 | 0.5349 | 1.0541 | 1.0267 |
| 0.0419 | 8.3020 | 3354 | 1.0590 | 0.5349 | 1.0590 | 1.0291 |
| 0.0419 | 8.3069 | 3356 | 1.0622 | 0.5349 | 1.0622 | 1.0306 |
| 0.0419 | 8.3119 | 3358 | 1.0618 | 0.5349 | 1.0618 | 1.0305 |
| 0.0419 | 8.3168 | 3360 | 1.0552 | 0.5349 | 1.0552 | 1.0272 |
| 0.0419 | 8.3218 | 3362 | 1.0439 | 0.5147 | 1.0439 | 1.0217 |
| 0.0419 | 8.3267 | 3364 | 1.0415 | 0.5147 | 1.0415 | 1.0205 |
| 0.0419 | 8.3317 | 3366 | 1.0455 | 0.5147 | 1.0455 | 1.0225 |
| 0.0419 | 8.3366 | 3368 | 1.0453 | 0.5147 | 1.0453 | 1.0224 |
| 0.0419 | 8.3416 | 3370 | 1.0489 | 0.5147 | 1.0489 | 1.0242 |
| 0.0419 | 8.3465 | 3372 | 1.0586 | 0.5147 | 1.0586 | 1.0289 |
| 0.0419 | 8.3515 | 3374 | 1.0662 | 0.5147 | 1.0662 | 1.0326 |
| 0.0419 | 8.3564 | 3376 | 1.0751 | 0.5147 | 1.0751 | 1.0369 |
| 0.0419 | 8.3614 | 3378 | 1.0761 | 0.5147 | 1.0761 | 1.0373 |
| 0.0419 | 8.3663 | 3380 | 1.0756 | 0.5147 | 1.0756 | 1.0371 |
| 0.0419 | 8.3713 | 3382 | 1.0709 | 0.5147 | 1.0709 | 1.0348 |
| 0.0419 | 8.3762 | 3384 | 1.0626 | 0.5147 | 1.0626 | 1.0308 |
| 0.0419 | 8.3812 | 3386 | 1.0585 | 0.5147 | 1.0585 | 1.0288 |
| 0.0419 | 8.3861 | 3388 | 1.0590 | 0.5147 | 1.0590 | 1.0291 |
| 0.0419 | 8.3911 | 3390 | 1.0601 | 0.5147 | 1.0601 | 1.0296 |
| 0.0419 | 8.3960 | 3392 | 1.0666 | 0.5147 | 1.0666 | 1.0328 |
| 0.0419 | 8.4010 | 3394 | 1.0797 | 0.5147 | 1.0797 | 1.0391 |
| 0.0419 | 8.4059 | 3396 | 1.0810 | 0.5147 | 1.0810 | 1.0397 |
| 0.0419 | 8.4109 | 3398 | 1.0789 | 0.5147 | 1.0789 | 1.0387 |
| 0.0419 | 8.4158 | 3400 | 1.0780 | 0.5147 | 1.0780 | 1.0383 |
| 0.0419 | 8.4208 | 3402 | 1.0860 | 0.5147 | 1.0860 | 1.0421 |
| 0.0419 | 8.4257 | 3404 | 1.0972 | 0.5147 | 1.0972 | 1.0475 |
| 0.0419 | 8.4307 | 3406 | 1.1108 | 0.5147 | 1.1108 | 1.0540 |
| 0.0419 | 8.4356 | 3408 | 1.1183 | 0.5147 | 1.1183 | 1.0575 |
| 0.0419 | 8.4406 | 3410 | 1.1271 | 0.5147 | 1.1271 | 1.0616 |
| 0.0419 | 8.4455 | 3412 | 1.1312 | 0.4916 | 1.1312 | 1.0636 |
| 0.0419 | 8.4505 | 3414 | 1.1348 | 0.5118 | 1.1348 | 1.0653 |
| 0.0419 | 8.4554 | 3416 | 1.1345 | 0.5118 | 1.1345 | 1.0651 |
| 0.0419 | 8.4604 | 3418 | 1.1308 | 0.5118 | 1.1308 | 1.0634 |
| 0.0419 | 8.4653 | 3420 | 1.1292 | 0.5118 | 1.1292 | 1.0627 |
| 0.0419 | 8.4703 | 3422 | 1.1230 | 0.5118 | 1.1230 | 1.0597 |
| 0.0419 | 8.4752 | 3424 | 1.1182 | 0.5118 | 1.1182 | 1.0574 |
| 0.0419 | 8.4802 | 3426 | 1.1054 | 0.5118 | 1.1054 | 1.0514 |
| 0.0419 | 8.4851 | 3428 | 1.0922 | 0.5118 | 1.0922 | 1.0451 |
| 0.0419 | 8.4901 | 3430 | 1.0465 | 0.5468 | 1.0465 | 1.0230 |
| 0.0419 | 8.4950 | 3432 | 1.0117 | 0.5709 | 1.0117 | 1.0058 |
| 0.0419 | 8.5000 | 3434 | 0.9875 | 0.6049 | 0.9875 | 0.9937 |
| 0.0419 | 8.5050 | 3436 | 0.9658 | 0.6049 | 0.9658 | 0.9827 |
| 0.0419 | 8.5099 | 3438 | 0.9512 | 0.6023 | 0.9512 | 0.9753 |
| 0.0419 | 8.5149 | 3440 | 0.9362 | 0.5841 | 0.9362 | 0.9676 |
| 0.0419 | 8.5198 | 3442 | 0.9189 | 0.5771 | 0.9189 | 0.9586 |
| 0.0419 | 8.5248 | 3444 | 0.8995 | 0.5771 | 0.8995 | 0.9484 |
| 0.0419 | 8.5297 | 3446 | 0.8863 | 0.5919 | 0.8863 | 0.9414 |
| 0.0419 | 8.5347 | 3448 | 0.8715 | 0.5694 | 0.8715 | 0.9335 |
| 0.0419 | 8.5396 | 3450 | 0.8625 | 0.5694 | 0.8625 | 0.9287 |
| 0.0419 | 8.5446 | 3452 | 0.8591 | 0.5694 | 0.8591 | 0.9269 |
| 0.0419 | 8.5495 | 3454 | 0.8559 | 0.5694 | 0.8559 | 0.9252 |
| 0.0419 | 8.5545 | 3456 | 0.8564 | 0.5694 | 0.8564 | 0.9254 |
| 0.0419 | 8.5594 | 3458 | 0.8598 | 0.5694 | 0.8598 | 0.9272 |
| 0.0419 | 8.5644 | 3460 | 0.8647 | 0.5694 | 0.8647 | 0.9299 |
| 0.0419 | 8.5693 | 3462 | 0.8685 | 0.5694 | 0.8685 | 0.9319 |
| 0.0419 | 8.5743 | 3464 | 0.8669 | 0.5694 | 0.8669 | 0.9311 |
| 0.0419 | 8.5792 | 3466 | 0.8664 | 0.5939 | 0.8664 | 0.9308 |
| 0.0419 | 8.5842 | 3468 | 0.8675 | 0.5939 | 0.8675 | 0.9314 |
| 0.0419 | 8.5891 | 3470 | 0.8656 | 0.5939 | 0.8656 | 0.9304 |
| 0.0419 | 8.5941 | 3472 | 0.8622 | 0.5939 | 0.8622 | 0.9286 |
| 0.0419 | 8.5990 | 3474 | 0.8619 | 0.5939 | 0.8619 | 0.9284 |
| 0.0419 | 8.6040 | 3476 | 0.8616 | 0.5939 | 0.8616 | 0.9282 |
| 0.0419 | 8.6089 | 3478 | 0.8590 | 0.5939 | 0.8590 | 0.9268 |
| 0.0419 | 8.6139 | 3480 | 0.8585 | 0.6175 | 0.8585 | 0.9266 |
| 0.0419 | 8.6188 | 3482 | 0.8595 | 0.6175 | 0.8595 | 0.9271 |
| 0.0419 | 8.6238 | 3484 | 0.8564 | 0.6175 | 0.8564 | 0.9254 |
| 0.0419 | 8.6287 | 3486 | 0.8535 | 0.6175 | 0.8535 | 0.9238 |
| 0.0419 | 8.6337 | 3488 | 0.8511 | 0.5781 | 0.8511 | 0.9225 |
| 0.0419 | 8.6386 | 3490 | 0.8493 | 0.5811 | 0.8493 | 0.9216 |
| 0.0419 | 8.6436 | 3492 | 0.8476 | 0.5811 | 0.8476 | 0.9206 |
| 0.0419 | 8.6485 | 3494 | 0.8447 | 0.5811 | 0.8447 | 0.9191 |
| 0.0419 | 8.6535 | 3496 | 0.8471 | 0.6016 | 0.8471 | 0.9204 |
| 0.0419 | 8.6584 | 3498 | 0.8545 | 0.5982 | 0.8545 | 0.9244 |
| 0.0373 | 8.6634 | 3500 | 0.8651 | 0.5982 | 0.8651 | 0.9301 |
| 0.0373 | 8.6683 | 3502 | 0.8726 | 0.5919 | 0.8726 | 0.9341 |
| 0.0373 | 8.6733 | 3504 | 0.8781 | 0.5919 | 0.8781 | 0.9371 |
| 0.0373 | 8.6782 | 3506 | 0.8884 | 0.6020 | 0.8884 | 0.9425 |
| 0.0373 | 8.6832 | 3508 | 0.8967 | 0.6020 | 0.8967 | 0.9469 |
| 0.0373 | 8.6881 | 3510 | 0.9060 | 0.6121 | 0.9060 | 0.9518 |
| 0.0373 | 8.6931 | 3512 | 0.9162 | 0.6121 | 0.9162 | 0.9572 |
| 0.0373 | 8.6980 | 3514 | 0.9295 | 0.5841 | 0.9295 | 0.9641 |
| 0.0373 | 8.7030 | 3516 | 0.9423 | 0.5841 | 0.9423 | 0.9707 |
| 0.0373 | 8.7079 | 3518 | 0.9574 | 0.5841 | 0.9574 | 0.9785 |
| 0.0373 | 8.7129 | 3520 | 0.9651 | 0.6023 | 0.9651 | 0.9824 |
| 0.0373 | 8.7178 | 3522 | 0.9660 | 0.6023 | 0.9660 | 0.9828 |
| 0.0373 | 8.7228 | 3524 | 0.9693 | 0.6023 | 0.9693 | 0.9845 |
| 0.0373 | 8.7277 | 3526 | 0.9738 | 0.6023 | 0.9738 | 0.9868 |
| 0.0373 | 8.7327 | 3528 | 0.9760 | 0.6023 | 0.9760 | 0.9879 |
| 0.0373 | 8.7376 | 3530 | 0.9745 | 0.5841 | 0.9745 | 0.9872 |
| 0.0373 | 8.7426 | 3532 | 0.9683 | 0.5841 | 0.9683 | 0.9840 |
| 0.0373 | 8.7475 | 3534 | 0.9619 | 0.5841 | 0.9619 | 0.9808 |
| 0.0373 | 8.7525 | 3536 | 0.9552 | 0.5841 | 0.9552 | 0.9773 |
| 0.0373 | 8.7574 | 3538 | 0.9483 | 0.5841 | 0.9483 | 0.9738 |
| 0.0373 | 8.7624 | 3540 | 0.9412 | 0.5652 | 0.9412 | 0.9702 |
| 0.0373 | 8.7673 | 3542 | 0.9313 | 0.5652 | 0.9313 | 0.9650 |
| 0.0373 | 8.7723 | 3544 | 0.9233 | 0.5652 | 0.9233 | 0.9609 |
| 0.0373 | 8.7772 | 3546 | 0.9132 | 0.5677 | 0.9132 | 0.9556 |
| 0.0373 | 8.7822 | 3548 | 0.9022 | 0.5930 | 0.9022 | 0.9499 |
| 0.0373 | 8.7871 | 3550 | 0.8979 | 0.5827 | 0.8979 | 0.9476 |
| 0.0373 | 8.7921 | 3552 | 0.8894 | 0.5725 | 0.8894 | 0.9431 |
| 0.0373 | 8.7970 | 3554 | 0.8815 | 0.5725 | 0.8815 | 0.9389 |
| 0.0373 | 8.8020 | 3556 | 0.8758 | 0.5725 | 0.8758 | 0.9359 |
| 0.0373 | 8.8069 | 3558 | 0.8724 | 0.5919 | 0.8724 | 0.9340 |
| 0.0373 | 8.8119 | 3560 | 0.8681 | 0.5919 | 0.8681 | 0.9317 |
| 0.0373 | 8.8168 | 3562 | 0.8649 | 0.5919 | 0.8649 | 0.9300 |
| 0.0373 | 8.8218 | 3564 | 0.8637 | 0.6175 | 0.8637 | 0.9294 |
| 0.0373 | 8.8267 | 3566 | 0.8650 | 0.6175 | 0.8650 | 0.9300 |
| 0.0373 | 8.8317 | 3568 | 0.8690 | 0.5919 | 0.8690 | 0.9322 |
| 0.0373 | 8.8366 | 3570 | 0.8728 | 0.5919 | 0.8728 | 0.9342 |
| 0.0373 | 8.8416 | 3572 | 0.8751 | 0.5919 | 0.8751 | 0.9355 |
| 0.0373 | 8.8465 | 3574 | 0.8785 | 0.5919 | 0.8785 | 0.9373 |
| 0.0373 | 8.8515 | 3576 | 0.8781 | 0.5919 | 0.8781 | 0.9371 |
| 0.0373 | 8.8564 | 3578 | 0.8826 | 0.5919 | 0.8826 | 0.9395 |
| 0.0373 | 8.8614 | 3580 | 0.8928 | 0.5919 | 0.8928 | 0.9449 |
| 0.0373 | 8.8663 | 3582 | 0.9091 | 0.5673 | 0.9091 | 0.9535 |
| 0.0373 | 8.8713 | 3584 | 0.9255 | 0.5928 | 0.9255 | 0.9620 |
| 0.0373 | 8.8762 | 3586 | 0.9369 | 0.5841 | 0.9369 | 0.9679 |
| 0.0373 | 8.8812 | 3588 | 0.9485 | 0.5841 | 0.9485 | 0.9739 |
| 0.0373 | 8.8861 | 3590 | 0.9573 | 0.5841 | 0.9573 | 0.9784 |
| 0.0373 | 8.8911 | 3592 | 0.9618 | 0.5841 | 0.9618 | 0.9807 |
| 0.0373 | 8.8960 | 3594 | 0.9525 | 0.5841 | 0.9525 | 0.9760 |
| 0.0373 | 8.9010 | 3596 | 0.9464 | 0.5841 | 0.9464 | 0.9728 |
| 0.0373 | 8.9059 | 3598 | 0.9444 | 0.5841 | 0.9444 | 0.9718 |
| 0.0373 | 8.9109 | 3600 | 0.9425 | 0.5841 | 0.9425 | 0.9708 |
| 0.0373 | 8.9158 | 3602 | 0.9433 | 0.6023 | 0.9433 | 0.9712 |
| 0.0373 | 8.9208 | 3604 | 0.9433 | 0.6023 | 0.9433 | 0.9712 |
| 0.0373 | 8.9257 | 3606 | 0.9398 | 0.6023 | 0.9398 | 0.9694 |
| 0.0373 | 8.9307 | 3608 | 0.9376 | 0.6023 | 0.9376 | 0.9683 |
| 0.0373 | 8.9356 | 3610 | 0.9346 | 0.6023 | 0.9346 | 0.9667 |
| 0.0373 | 8.9406 | 3612 | 0.9295 | 0.5928 | 0.9295 | 0.9641 |
| 0.0373 | 8.9455 | 3614 | 0.9248 | 0.5928 | 0.9248 | 0.9617 |
| 0.0373 | 8.9505 | 3616 | 0.9215 | 0.5928 | 0.9215 | 0.9599 |
| 0.0373 | 8.9554 | 3618 | 0.9200 | 0.5771 | 0.9200 | 0.9592 |
| 0.0373 | 8.9604 | 3620 | 0.9187 | 0.5771 | 0.9187 | 0.9585 |
| 0.0373 | 8.9653 | 3622 | 0.9205 | 0.5870 | 0.9205 | 0.9594 |
| 0.0373 | 8.9703 | 3624 | 0.9251 | 0.5870 | 0.9251 | 0.9618 |
| 0.0373 | 8.9752 | 3626 | 0.9288 | 0.5870 | 0.9288 | 0.9637 |
| 0.0373 | 8.9802 | 3628 | 0.9297 | 0.5870 | 0.9297 | 0.9642 |
| 0.0373 | 8.9851 | 3630 | 0.9296 | 0.5870 | 0.9296 | 0.9642 |
| 0.0373 | 8.9901 | 3632 | 0.9286 | 0.5870 | 0.9286 | 0.9636 |
| 0.0373 | 8.9950 | 3634 | 0.9292 | 0.5870 | 0.9292 | 0.9639 |
| 0.0373 | 9.0000 | 3636 | 0.9327 | 0.5850 | 0.9327 | 0.9658 |
| 0.0373 | 9.0050 | 3638 | 0.9375 | 0.5850 | 0.9375 | 0.9682 |
| 0.0373 | 9.0099 | 3640 | 0.9444 | 0.6036 | 0.9444 | 0.9718 |
| 0.0373 | 9.0149 | 3642 | 0.9546 | 0.6049 | 0.9546 | 0.9770 |
| 0.0373 | 9.0198 | 3644 | 0.9619 | 0.6049 | 0.9619 | 0.9808 |
| 0.0373 | 9.0248 | 3646 | 0.9719 | 0.6049 | 0.9719 | 0.9858 |
| 0.0373 | 9.0297 | 3648 | 0.9812 | 0.6049 | 0.9812 | 0.9906 |
| 0.0373 | 9.0347 | 3650 | 0.9905 | 0.5709 | 0.9905 | 0.9952 |
| 0.0373 | 9.0396 | 3652 | 0.9947 | 0.5709 | 0.9947 | 0.9973 |
| 0.0373 | 9.0446 | 3654 | 0.9955 | 0.5709 | 0.9955 | 0.9977 |
| 0.0373 | 9.0495 | 3656 | 0.9914 | 0.5709 | 0.9914 | 0.9957 |
| 0.0373 | 9.0545 | 3658 | 0.9880 | 0.5709 | 0.9880 | 0.9940 |
| 0.0373 | 9.0594 | 3660 | 0.9845 | 0.6049 | 0.9845 | 0.9922 |
| 0.0373 | 9.0644 | 3662 | 0.9811 | 0.6049 | 0.9811 | 0.9905 |
| 0.0373 | 9.0693 | 3664 | 0.9771 | 0.6049 | 0.9771 | 0.9885 |
| 0.0373 | 9.0743 | 3666 | 0.9676 | 0.6049 | 0.9676 | 0.9837 |
| 0.0373 | 9.0792 | 3668 | 0.9564 | 0.6049 | 0.9564 | 0.9780 |
| 0.0373 | 9.0842 | 3670 | 0.9519 | 0.6036 | 0.9519 | 0.9756 |
| 0.0373 | 9.0891 | 3672 | 0.9484 | 0.6036 | 0.9484 | 0.9739 |
| 0.0373 | 9.0941 | 3674 | 0.9472 | 0.6036 | 0.9472 | 0.9732 |
| 0.0373 | 9.0990 | 3676 | 0.9447 | 0.6036 | 0.9447 | 0.9719 |
| 0.0373 | 9.1040 | 3678 | 0.9399 | 0.6036 | 0.9399 | 0.9695 |
| 0.0373 | 9.1089 | 3680 | 0.9349 | 0.6023 | 0.9349 | 0.9669 |
| 0.0373 | 9.1139 | 3682 | 0.9297 | 0.6023 | 0.9297 | 0.9642 |
| 0.0373 | 9.1188 | 3684 | 0.9239 | 0.6023 | 0.9239 | 0.9612 |
| 0.0373 | 9.1238 | 3686 | 0.9166 | 0.5870 | 0.9166 | 0.9574 |
| 0.0373 | 9.1287 | 3688 | 0.9105 | 0.5870 | 0.9105 | 0.9542 |
| 0.0373 | 9.1337 | 3690 | 0.9057 | 0.5870 | 0.9057 | 0.9517 |
| 0.0373 | 9.1386 | 3692 | 0.9046 | 0.5870 | 0.9046 | 0.9511 |
| 0.0373 | 9.1436 | 3694 | 0.9044 | 0.5870 | 0.9044 | 0.9510 |
| 0.0373 | 9.1485 | 3696 | 0.9013 | 0.5870 | 0.9013 | 0.9494 |
| 0.0373 | 9.1535 | 3698 | 0.9015 | 0.5870 | 0.9015 | 0.9495 |
| 0.0373 | 9.1584 | 3700 | 0.9049 | 0.5879 | 0.9049 | 0.9512 |
| 0.0373 | 9.1634 | 3702 | 0.9080 | 0.5879 | 0.9080 | 0.9529 |
| 0.0373 | 9.1683 | 3704 | 0.9104 | 0.5879 | 0.9104 | 0.9542 |
| 0.0373 | 9.1733 | 3706 | 0.9125 | 0.5888 | 0.9125 | 0.9552 |
| 0.0373 | 9.1782 | 3708 | 0.9178 | 0.5888 | 0.9178 | 0.9580 |
| 0.0373 | 9.1832 | 3710 | 0.9229 | 0.5858 | 0.9229 | 0.9607 |
| 0.0373 | 9.1881 | 3712 | 0.9288 | 0.5858 | 0.9288 | 0.9638 |
| 0.0373 | 9.1931 | 3714 | 0.9316 | 0.5858 | 0.9316 | 0.9652 |
| 0.0373 | 9.1980 | 3716 | 0.9347 | 0.5858 | 0.9347 | 0.9668 |
| 0.0373 | 9.2030 | 3718 | 0.9388 | 0.5858 | 0.9388 | 0.9689 |
| 0.0373 | 9.2079 | 3720 | 0.9432 | 0.5858 | 0.9432 | 0.9712 |
| 0.0373 | 9.2129 | 3722 | 0.9501 | 0.5709 | 0.9501 | 0.9747 |
| 0.0373 | 9.2178 | 3724 | 0.9555 | 0.5709 | 0.9555 | 0.9775 |
| 0.0373 | 9.2228 | 3726 | 0.9566 | 0.5709 | 0.9566 | 0.9780 |
| 0.0373 | 9.2277 | 3728 | 0.9593 | 0.5709 | 0.9593 | 0.9795 |
| 0.0373 | 9.2327 | 3730 | 0.9574 | 0.5709 | 0.9574 | 0.9785 |
| 0.0373 | 9.2376 | 3732 | 0.9544 | 0.5709 | 0.9544 | 0.9770 |
| 0.0373 | 9.2426 | 3734 | 0.9514 | 0.5709 | 0.9514 | 0.9754 |
| 0.0373 | 9.2475 | 3736 | 0.9482 | 0.5511 | 0.9482 | 0.9738 |
| 0.0373 | 9.2525 | 3738 | 0.9521 | 0.5709 | 0.9521 | 0.9758 |
| 0.0373 | 9.2574 | 3740 | 0.9611 | 0.5709 | 0.9611 | 0.9804 |
| 0.0373 | 9.2624 | 3742 | 0.9698 | 0.5709 | 0.9698 | 0.9848 |
| 0.0373 | 9.2673 | 3744 | 0.9761 | 0.5709 | 0.9761 | 0.9880 |
| 0.0373 | 9.2723 | 3746 | 0.9809 | 0.5709 | 0.9809 | 0.9904 |
| 0.0373 | 9.2772 | 3748 | 0.9860 | 0.5709 | 0.9860 | 0.9930 |
| 0.0373 | 9.2822 | 3750 | 0.9902 | 0.5709 | 0.9902 | 0.9951 |
| 0.0373 | 9.2871 | 3752 | 0.9955 | 0.5709 | 0.9955 | 0.9977 |
| 0.0373 | 9.2921 | 3754 | 0.9974 | 0.5709 | 0.9974 | 0.9987 |
| 0.0373 | 9.2970 | 3756 | 0.9937 | 0.5709 | 0.9937 | 0.9968 |
| 0.0373 | 9.3020 | 3758 | 0.9911 | 0.5709 | 0.9911 | 0.9955 |
| 0.0373 | 9.3069 | 3760 | 0.9894 | 0.5709 | 0.9894 | 0.9947 |
| 0.0373 | 9.3119 | 3762 | 0.9874 | 0.5709 | 0.9874 | 0.9937 |
| 0.0373 | 9.3168 | 3764 | 0.9862 | 0.5709 | 0.9862 | 0.9931 |
| 0.0373 | 9.3218 | 3766 | 0.9854 | 0.5709 | 0.9854 | 0.9927 |
| 0.0373 | 9.3267 | 3768 | 0.9882 | 0.5709 | 0.9882 | 0.9941 |
| 0.0373 | 9.3317 | 3770 | 0.9892 | 0.5709 | 0.9892 | 0.9946 |
| 0.0373 | 9.3366 | 3772 | 0.9918 | 0.5709 | 0.9918 | 0.9959 |
| 0.0373 | 9.3416 | 3774 | 0.9936 | 0.5709 | 0.9936 | 0.9968 |
| 0.0373 | 9.3465 | 3776 | 0.9938 | 0.5709 | 0.9938 | 0.9969 |
| 0.0373 | 9.3515 | 3778 | 0.9947 | 0.5709 | 0.9947 | 0.9973 |
| 0.0373 | 9.3564 | 3780 | 0.9967 | 0.5709 | 0.9967 | 0.9983 |
| 0.0373 | 9.3614 | 3782 | 0.9990 | 0.5709 | 0.9990 | 0.9995 |
| 0.0373 | 9.3663 | 3784 | 0.9998 | 0.5709 | 0.9998 | 0.9999 |
| 0.0373 | 9.3713 | 3786 | 1.0024 | 0.5709 | 1.0024 | 1.0012 |
| 0.0373 | 9.3762 | 3788 | 1.0043 | 0.5709 | 1.0043 | 1.0022 |
| 0.0373 | 9.3812 | 3790 | 1.0106 | 0.5709 | 1.0106 | 1.0053 |
| 0.0373 | 9.3861 | 3792 | 1.0168 | 0.5709 | 1.0168 | 1.0084 |
| 0.0373 | 9.3911 | 3794 | 1.0231 | 0.5709 | 1.0231 | 1.0115 |
| 0.0373 | 9.3960 | 3796 | 1.0278 | 0.5709 | 1.0278 | 1.0138 |
| 0.0373 | 9.4010 | 3798 | 1.0309 | 0.5709 | 1.0309 | 1.0153 |
| 0.0373 | 9.4059 | 3800 | 1.0322 | 0.5709 | 1.0322 | 1.0160 |
| 0.0373 | 9.4109 | 3802 | 1.0311 | 0.5709 | 1.0311 | 1.0154 |
| 0.0373 | 9.4158 | 3804 | 1.0323 | 0.5709 | 1.0323 | 1.0160 |
| 0.0373 | 9.4208 | 3806 | 1.0333 | 0.5709 | 1.0333 | 1.0165 |
| 0.0373 | 9.4257 | 3808 | 1.0329 | 0.5709 | 1.0329 | 1.0163 |
| 0.0373 | 9.4307 | 3810 | 1.0310 | 0.5709 | 1.0310 | 1.0154 |
| 0.0373 | 9.4356 | 3812 | 1.0306 | 0.5709 | 1.0306 | 1.0152 |
| 0.0373 | 9.4406 | 3814 | 1.0307 | 0.5709 | 1.0307 | 1.0152 |
| 0.0373 | 9.4455 | 3816 | 1.0342 | 0.5709 | 1.0342 | 1.0169 |
| 0.0373 | 9.4505 | 3818 | 1.0380 | 0.5709 | 1.0380 | 1.0188 |
| 0.0373 | 9.4554 | 3820 | 1.0423 | 0.5709 | 1.0423 | 1.0209 |
| 0.0373 | 9.4604 | 3822 | 1.0448 | 0.5709 | 1.0448 | 1.0222 |
| 0.0373 | 9.4653 | 3824 | 1.0467 | 0.5709 | 1.0467 | 1.0231 |
| 0.0373 | 9.4703 | 3826 | 1.0464 | 0.5709 | 1.0464 | 1.0229 |
| 0.0373 | 9.4752 | 3828 | 1.0452 | 0.5709 | 1.0452 | 1.0223 |
| 0.0373 | 9.4802 | 3830 | 1.0459 | 0.5709 | 1.0459 | 1.0227 |
| 0.0373 | 9.4851 | 3832 | 1.0475 | 0.5588 | 1.0475 | 1.0235 |
| 0.0373 | 9.4901 | 3834 | 1.0479 | 0.5588 | 1.0479 | 1.0237 |
| 0.0373 | 9.4950 | 3836 | 1.0457 | 0.5588 | 1.0457 | 1.0226 |
| 0.0373 | 9.5 | 3838 | 1.0438 | 0.5588 | 1.0438 | 1.0217 |
| 0.0373 | 9.5050 | 3840 | 1.0406 | 0.5588 | 1.0406 | 1.0201 |
| 0.0373 | 9.5099 | 3842 | 1.0368 | 0.5709 | 1.0368 | 1.0182 |
| 0.0373 | 9.5149 | 3844 | 1.0327 | 0.5588 | 1.0327 | 1.0162 |
| 0.0373 | 9.5198 | 3846 | 1.0306 | 0.5588 | 1.0306 | 1.0152 |
| 0.0373 | 9.5248 | 3848 | 1.0316 | 0.5467 | 1.0316 | 1.0157 |
| 0.0373 | 9.5297 | 3850 | 1.0318 | 0.5263 | 1.0318 | 1.0158 |
| 0.0373 | 9.5347 | 3852 | 1.0350 | 0.5263 | 1.0350 | 1.0174 |
| 0.0373 | 9.5396 | 3854 | 1.0398 | 0.5263 | 1.0398 | 1.0197 |
| 0.0373 | 9.5446 | 3856 | 1.0425 | 0.5263 | 1.0425 | 1.0210 |
| 0.0373 | 9.5495 | 3858 | 1.0459 | 0.5467 | 1.0459 | 1.0227 |
| 0.0373 | 9.5545 | 3860 | 1.0491 | 0.5467 | 1.0491 | 1.0243 |
| 0.0373 | 9.5594 | 3862 | 1.0518 | 0.5467 | 1.0518 | 1.0256 |
| 0.0373 | 9.5644 | 3864 | 1.0500 | 0.5467 | 1.0500 | 1.0247 |
| 0.0373 | 9.5693 | 3866 | 1.0451 | 0.5467 | 1.0451 | 1.0223 |
| 0.0373 | 9.5743 | 3868 | 1.0417 | 0.5467 | 1.0417 | 1.0206 |
| 0.0373 | 9.5792 | 3870 | 1.0412 | 0.5467 | 1.0412 | 1.0204 |
| 0.0373 | 9.5842 | 3872 | 1.0422 | 0.5467 | 1.0422 | 1.0209 |
| 0.0373 | 9.5891 | 3874 | 1.0422 | 0.5588 | 1.0422 | 1.0209 |
| 0.0373 | 9.5941 | 3876 | 1.0409 | 0.5709 | 1.0409 | 1.0203 |
| 0.0373 | 9.5990 | 3878 | 1.0387 | 0.5709 | 1.0387 | 1.0192 |
| 0.0373 | 9.6040 | 3880 | 1.0356 | 0.5709 | 1.0356 | 1.0176 |
| 0.0373 | 9.6089 | 3882 | 1.0323 | 0.5709 | 1.0323 | 1.0160 |
| 0.0373 | 9.6139 | 3884 | 1.0294 | 0.5709 | 1.0294 | 1.0146 |
| 0.0373 | 9.6188 | 3886 | 1.0271 | 0.5709 | 1.0271 | 1.0135 |
| 0.0373 | 9.6238 | 3888 | 1.0246 | 0.5709 | 1.0246 | 1.0122 |
| 0.0373 | 9.6287 | 3890 | 1.0226 | 0.5709 | 1.0226 | 1.0113 |
| 0.0373 | 9.6337 | 3892 | 1.0138 | 0.5709 | 1.0138 | 1.0069 |
| 0.0373 | 9.6386 | 3894 | 1.0056 | 0.5709 | 1.0056 | 1.0028 |
| 0.0373 | 9.6436 | 3896 | 0.9978 | 0.5709 | 0.9978 | 0.9989 |
| 0.0373 | 9.6485 | 3898 | 0.9929 | 0.5709 | 0.9929 | 0.9965 |
| 0.0373 | 9.6535 | 3900 | 0.9905 | 0.5709 | 0.9905 | 0.9952 |
| 0.0373 | 9.6584 | 3902 | 0.9888 | 0.5709 | 0.9888 | 0.9944 |
| 0.0373 | 9.6634 | 3904 | 0.9882 | 0.5709 | 0.9882 | 0.9941 |
| 0.0373 | 9.6683 | 3906 | 0.9877 | 0.5511 | 0.9877 | 0.9938 |
| 0.0373 | 9.6733 | 3908 | 0.9872 | 0.5511 | 0.9872 | 0.9936 |
| 0.0373 | 9.6782 | 3910 | 0.9876 | 0.5388 | 0.9876 | 0.9938 |
| 0.0373 | 9.6832 | 3912 | 0.9885 | 0.5388 | 0.9885 | 0.9942 |
| 0.0373 | 9.6881 | 3914 | 0.9888 | 0.5511 | 0.9888 | 0.9944 |
| 0.0373 | 9.6931 | 3916 | 0.9888 | 0.5511 | 0.9888 | 0.9944 |
| 0.0373 | 9.6980 | 3918 | 0.9886 | 0.5511 | 0.9886 | 0.9943 |
| 0.0373 | 9.7030 | 3920 | 0.9894 | 0.5511 | 0.9894 | 0.9947 |
| 0.0373 | 9.7079 | 3922 | 0.9916 | 0.5511 | 0.9916 | 0.9958 |
| 0.0373 | 9.7129 | 3924 | 0.9911 | 0.5709 | 0.9911 | 0.9955 |
| 0.0373 | 9.7178 | 3926 | 0.9914 | 0.5709 | 0.9914 | 0.9957 |
| 0.0373 | 9.7228 | 3928 | 0.9918 | 0.5709 | 0.9918 | 0.9959 |
| 0.0373 | 9.7277 | 3930 | 0.9931 | 0.5709 | 0.9931 | 0.9966 |
| 0.0373 | 9.7327 | 3932 | 0.9941 | 0.5709 | 0.9941 | 0.9970 |
| 0.0373 | 9.7376 | 3934 | 0.9942 | 0.5709 | 0.9942 | 0.9971 |
| 0.0373 | 9.7426 | 3936 | 0.9935 | 0.5709 | 0.9935 | 0.9967 |
| 0.0373 | 9.7475 | 3938 | 0.9932 | 0.5709 | 0.9932 | 0.9966 |
| 0.0373 | 9.7525 | 3940 | 0.9926 | 0.5709 | 0.9926 | 0.9963 |
| 0.0373 | 9.7574 | 3942 | 0.9919 | 0.5709 | 0.9919 | 0.9959 |
| 0.0373 | 9.7624 | 3944 | 0.9924 | 0.5709 | 0.9924 | 0.9962 |
| 0.0373 | 9.7673 | 3946 | 0.9934 | 0.5709 | 0.9934 | 0.9967 |
| 0.0373 | 9.7723 | 3948 | 0.9941 | 0.5709 | 0.9941 | 0.9971 |
| 0.0373 | 9.7772 | 3950 | 0.9948 | 0.5709 | 0.9948 | 0.9974 |
| 0.0373 | 9.7822 | 3952 | 0.9947 | 0.5709 | 0.9947 | 0.9974 |
| 0.0373 | 9.7871 | 3954 | 0.9930 | 0.5709 | 0.9930 | 0.9965 |
| 0.0373 | 9.7921 | 3956 | 0.9911 | 0.5511 | 0.9911 | 0.9956 |
| 0.0373 | 9.7970 | 3958 | 0.9889 | 0.5511 | 0.9889 | 0.9944 |
| 0.0373 | 9.8020 | 3960 | 0.9877 | 0.5511 | 0.9877 | 0.9938 |
| 0.0373 | 9.8069 | 3962 | 0.9852 | 0.5511 | 0.9852 | 0.9926 |
| 0.0373 | 9.8119 | 3964 | 0.9841 | 0.5511 | 0.9841 | 0.9920 |
| 0.0373 | 9.8168 | 3966 | 0.9834 | 0.5511 | 0.9834 | 0.9917 |
| 0.0373 | 9.8218 | 3968 | 0.9826 | 0.5511 | 0.9826 | 0.9913 |
| 0.0373 | 9.8267 | 3970 | 0.9822 | 0.5511 | 0.9822 | 0.9910 |
| 0.0373 | 9.8317 | 3972 | 0.9822 | 0.5511 | 0.9822 | 0.9911 |
| 0.0373 | 9.8366 | 3974 | 0.9825 | 0.5511 | 0.9825 | 0.9912 |
| 0.0373 | 9.8416 | 3976 | 0.9831 | 0.5511 | 0.9831 | 0.9915 |
| 0.0373 | 9.8465 | 3978 | 0.9832 | 0.5511 | 0.9832 | 0.9915 |
| 0.0373 | 9.8515 | 3980 | 0.9834 | 0.5511 | 0.9834 | 0.9917 |
| 0.0373 | 9.8564 | 3982 | 0.9841 | 0.5511 | 0.9841 | 0.9920 |
| 0.0373 | 9.8614 | 3984 | 0.9851 | 0.5511 | 0.9851 | 0.9925 |
| 0.0373 | 9.8663 | 3986 | 0.9851 | 0.5511 | 0.9851 | 0.9925 |
| 0.0373 | 9.8713 | 3988 | 0.9854 | 0.5511 | 0.9854 | 0.9926 |
| 0.0373 | 9.8762 | 3990 | 0.9854 | 0.5511 | 0.9854 | 0.9927 |
| 0.0373 | 9.8812 | 3992 | 0.9854 | 0.5511 | 0.9854 | 0.9927 |
| 0.0373 | 9.8861 | 3994 | 0.9856 | 0.5511 | 0.9856 | 0.9928 |
| 0.0373 | 9.8911 | 3996 | 0.9851 | 0.5511 | 0.9851 | 0.9925 |
| 0.0373 | 9.8960 | 3998 | 0.9846 | 0.5511 | 0.9846 | 0.9923 |
| 0.0348 | 9.9010 | 4000 | 0.9837 | 0.5511 | 0.9837 | 0.9918 |
| 0.0348 | 9.9059 | 4002 | 0.9829 | 0.5511 | 0.9829 | 0.9914 |
| 0.0348 | 9.9109 | 4004 | 0.9826 | 0.5511 | 0.9826 | 0.9913 |
| 0.0348 | 9.9158 | 4006 | 0.9823 | 0.5511 | 0.9823 | 0.9911 |
| 0.0348 | 9.9208 | 4008 | 0.9820 | 0.5511 | 0.9820 | 0.9910 |
| 0.0348 | 9.9257 | 4010 | 0.9817 | 0.5511 | 0.9817 | 0.9908 |
| 0.0348 | 9.9307 | 4012 | 0.9815 | 0.5511 | 0.9815 | 0.9907 |
| 0.0348 | 9.9356 | 4014 | 0.9817 | 0.5511 | 0.9817 | 0.9908 |
| 0.0348 | 9.9406 | 4016 | 0.9820 | 0.5511 | 0.9820 | 0.9910 |
| 0.0348 | 9.9455 | 4018 | 0.9827 | 0.5511 | 0.9827 | 0.9913 |
| 0.0348 | 9.9505 | 4020 | 0.9833 | 0.5511 | 0.9833 | 0.9916 |
| 0.0348 | 9.9554 | 4022 | 0.9835 | 0.5511 | 0.9835 | 0.9917 |
| 0.0348 | 9.9604 | 4024 | 0.9835 | 0.5511 | 0.9835 | 0.9917 |
| 0.0348 | 9.9653 | 4026 | 0.9836 | 0.5511 | 0.9836 | 0.9918 |
| 0.0348 | 9.9703 | 4028 | 0.9837 | 0.5511 | 0.9837 | 0.9918 |
| 0.0348 | 9.9752 | 4030 | 0.9837 | 0.5511 | 0.9837 | 0.9918 |
| 0.0348 | 9.9802 | 4032 | 0.9831 | 0.5511 | 0.9831 | 0.9915 |
| 0.0348 | 9.9851 | 4034 | 0.9827 | 0.5511 | 0.9827 | 0.9913 |
| 0.0348 | 9.9901 | 4036 | 0.9824 | 0.5511 | 0.9824 | 0.9912 |
| 0.0348 | 9.9950 | 4038 | 0.9822 | 0.5511 | 0.9822 | 0.9911 |
| 0.0348 | 10.0 | 4040 | 0.9821 | 0.5511 | 0.9821 | 0.9910 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
huggingtweets/nootropicguy | huggingtweets | "2023-04-18T18:55:18Z" | 111 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-04-17T23:50:09Z" | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1596337188270231553/eG5S-tRv_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">NootBro</div>
<div style="text-align: center; font-size: 14px;">@nootropicguy</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from NootBro.
| Data | NootBro |
| --- | --- |
| Tweets downloaded | 2516 |
| Retweets | 253 |
| Short tweets | 472 |
| Tweets kept | 1791 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kessm8a4/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @nootropicguy's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vwpnbram) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vwpnbram/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/nootropicguy')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
isspek/xlnet-base-cased_zika_gpt4o_2_2e-5_16_undersampling_0.5 | isspek | "2024-12-21T10:17:31Z" | 120 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-21T10:17:13Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
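A minimal sketch, assuming the checkpoint loads as a standard 🤗 sequence-classification model (the repo id is taken from this card's metadata; the example input is hypothetical):

```python
from transformers import pipeline

# Assumes a standard text-classification head; label names depend on the training setup.
classifier = pipeline(
    "text-classification",
    model="isspek/xlnet-base-cased_zika_gpt4o_2_2e-5_16_undersampling_0.5",
)
print(classifier("Zika virus outbreak reported in the region."))
```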
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MinaMila/phi3_GermanCredit_cfda_9ep_42 | MinaMila | "2025-03-18T19:18:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:finetune:unsloth/Phi-3.5-mini-instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-18T19:15:35Z" | ---
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3.5-mini-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
patpatng/DeepSeek-R1-Medical-COT-Qwen-7B_q4_k_m | patpatng | "2025-03-03T04:48:32Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-03T03:00:39Z" | ---
base_model: unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** patpatng
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-qwen-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
saudsaleem/qwen-8b-12 | saudsaleem | "2025-03-19T10:32:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"autotrain",
"text-generation-inference",
"peft",
"conversational",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-7B-Instruct",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-19T08:59:39Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: Qwen/Qwen2-7B-Instruct
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
mradermacher/Agent-7b-v1-128k-GGUF | mradermacher | "2024-05-06T04:56:40Z" | 18 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:m-a-p/Code-Feedback",
"base_model:Vezora/Agent-7b-v1-128k",
"base_model:quantized:Vezora/Agent-7b-v1-128k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-04-14T09:04:28Z" | ---
base_model: Vezora/Agent-7b-v1-128k
datasets:
- m-a-p/Code-Feedback
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Vezora/Agent-7b-v1-128k
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
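For a concrete starting point, here is a minimal sketch using `llama-cpp-python` (an assumption on my part — any GGUF-capable runtime works; the filename matches the Q4_K_M row in the table below):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load a downloaded quant file; n_ctx is the context window to allocate.
llm = Llama(model_path="Agent-7b-v1-128k.Q4_K_M.gguf", n_ctx=4096)
out = llm("User: Tell me about AI\nAssistant:", max_tokens=128)
print(out["choices"][0]["text"])
```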
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Agent-7b-v1-128k-GGUF/resolve/main/Agent-7b-v1-128k.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
bsoviedo/es_colner_added_tw_nl | bsoviedo | "2025-03-13T00:58:51Z" | 0 | 0 | spacy | [
"spacy",
"token-classification",
"es",
"model-index",
"region:us"
] | token-classification | "2025-03-13T00:50:49Z" | ---
tags:
- spacy
- token-classification
language:
- es
model-index:
- name: es_colner_added_tw_nl
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8744939271
- name: NER Recall
type: recall
value: 0.864
- name: NER F Score
type: f_score
value: 0.8692152918
---
| Feature | Description |
| --- | --- |
| **Name** | `es_COLNER_added_tw_NL` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.8.4,<3.9.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `LOC` |
</details>
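### Usage

A minimal sketch, assuming the pipeline package built from this repo has been installed (the example sentence is hypothetical):

```python
import spacy

# Load the installed pipeline package by name.
nlp = spacy.load("es_colner_added_tw_nl")

doc = nlp("Hubo un derrumbe en la vía entre Medellín y Bogotá.")
print([(ent.text, ent.label_) for ent in doc.ents])  # only LOC entities are predicted
```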
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 86.92 |
| `ENTS_P` | 87.45 |
| `ENTS_R` | 86.40 |
| `TRANSFORMER_LOSS` | 11147.80 |
| `NER_LOSS` | 17874.86 | |
prithivMLmods/Messier-Opus-14B-Sm8 | prithivMLmods | "2025-03-08T04:40:51Z" | 0 | 3 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"code",
"StreamlinedMemory",
"trl",
"sft",
"conversational",
"en",
"zh",
"base_model:Qwen/Qwen2.5-14B-Instruct-1M",
"base_model:finetune:Qwen/Qwen2.5-14B-Instruct-1M",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-06T03:25:52Z" | ---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-14B-Instruct-1M
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- code
- StreamlinedMemory
- trl
- sft
---

# **Messier-Opus-14B-Sm8**
> Messier-Opus-14B-Sm8 is based on the Qwen 2.5 14B architecture and is designed to enhance coding efficiency and computational reasoning. The model is optimized for streamlined memory usage, avoids unwanted textual token generation, and excels in coding, explanatory reasoning, mathematical problem-solving, and technical tasks. It has been fine-tuned on specialized datasets to improve code generation, structured programming logic, and problem-solving capabilities.
## **Key Improvements**
1. **Optimized for Coding**: The model specializes in generating high-quality, structured code with minimal redundant tokens, ensuring efficient execution.
2. **Enhanced Memory Utilization**: Implements streamlined memory optimization to reduce computational overhead and improve performance.
3. **Superior Reasoning Capabilities**: Excels in solving complex mathematical and algorithmic problems with logical and structured explanations.
4. **Long-Context Support**: Supports up to 128K tokens for input context and can generate up to 8K tokens in a single output, making it ideal for detailed coding responses.
5. **Reduced Unwanted Textual Tokens**: Ensures a more focused output for coding tasks by minimizing excessive textual responses.
## **Quickstart with transformers**
Here is a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and generate content:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "prithivMLmods/Messier-Opus-14B-Sm8"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Write a Python function to find the Fibonacci sequence."
messages = [
{"role": "system", "content": "You are an advanced coding assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
## **Intended Use**
1. **Code Generation & Optimization**:
Designed for developers, assisting in writing, refactoring, and optimizing code across multiple programming languages.
2. **Algorithm & Mathematical Problem Solving**:
Provides precise explanations and solutions for computational and mathematical problems.
3. **Technical Explanations & Documentation**:
Generates clear and structured explanations for coding concepts, libraries, and APIs.
4. **Debugging Assistance**:
Helps analyze code snippets, detect errors, and suggest corrections.
5. **Educational Use**:
Assists students and learners by breaking down complex programming topics into easily understandable sections.
6. **Structured Data Processing**:
Capable of analyzing and generating structured outputs, such as JSON, XML, and tables, making it ideal for data science applications.
## **Limitations**
1. **Hardware Requirements**:
Requires high-memory GPUs or TPUs due to its large parameter size and long-context support.
2. **Potential Bias in Responses**:
While designed to be neutral, outputs may still reflect biases present in training data.
3. **Inconsistent Outputs in Creative Tasks**:
May produce variable results in storytelling and non-technical topics.
4. **Limited Real-World Awareness**:
Does not have access to real-time events beyond its training cutoff.
5. **Error Propagation in Extended Outputs**:
Minor errors in early responses may affect overall coherence in long-form code outputs.
6. **Prompt Sensitivity**:
The effectiveness of responses may depend on how well the input prompt is structured. |
TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ | TheBloke | "2023-11-23T02:13:02Z" | 25 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"base_model:uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85",
"base_model:quantized:uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-11-23T01:44:35Z" | ---
base_model: uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85
inference: false
license: llama2
model_creator: Jiangwen Su
model_name: Collectivecognition V1.1 Mistral 7B Dare 0.85
model_type: mistral
prompt_template: "User: {prompt}\nAssistant: \n"
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Collectivecognition V1.1 Mistral 7B Dare 0.85 - GPTQ
- Model creator: [Jiangwen Su](https://huggingface.co/uukuguy)
- Original model: [Collectivecognition V1.1 Mistral 7B Dare 0.85](https://huggingface.co/uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Jiangwen Su's Collectivecognition V1.1 Mistral 7B Dare 0.85](https://huggingface.co/uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GGUF)
* [Jiangwen Su's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/uukuguy/CollectiveCognition-v1.1-Mistral-7B-dare-0.85)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: User-Assistant-lc
```
User: {prompt}
Assistant:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
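As a reference for how these columns map onto AutoGPTQ's quantisation config, here is a small illustrative sketch (values taken from the `main` branch row below; illustrative only, not a record of how the provided files were produced):

```python
from auto_gptq import BaseQuantizeConfig

# Bits / GS / Act Order / Damp % from the table, expressed as a config object.
quantize_config = BaseQuantizeConfig(
    bits=4,            # quantisation bit width
    group_size=128,    # GS; -1 corresponds to "None" (no grouping)
    desc_act=True,     # Act Order
    damp_percent=0.1,  # Damp %
)
```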
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [open-instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.29 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ`:
```shell
mkdir CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ
huggingface-cli download TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ --local-dir CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ
huggingface-cli download TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows interrupted downloads to be resumed, and lets you quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder, making it harder to see where your disk space is being used and to clear it up if/when you want to remove a downloaded model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ --local-dir CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space, since it has to store the model files twice (every byte is stored both in the intended target folder and again in the `.git` folder as a blob).
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''User: {prompt}
Assistant:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/CollectiveCognition-v1.1-Mistral-7B-dare-0.85-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''User: {prompt}
Assistant:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Jiangwen Su's Collectivecognition V1.1 Mistral 7B Dare 0.85
An experiment with DARE (Drop And REscale): most of the delta parameters can be set to zero without affecting the capabilities of SFT LMs, and larger models can tolerate a higher proportion of discarded parameters.
weight_mask_rate: 0.85 / use_weight_rescale: True / mask_strategy: random / scaling_coefficient: 1.0
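A minimal sketch of the drop-and-rescale step described above (hypothetical helper; assumes two state dicts with matching keys):

```python
import torch

def dare_merge(base_state, finetuned_state, drop_rate=0.85):
    """Randomly drop delta parameters and rescale the survivors (DARE)."""
    keep_prob = 1.0 - drop_rate
    merged = {}
    for name, base_w in base_state.items():
        delta = finetuned_state[name] - base_w            # SFT delta
        mask = torch.bernoulli(torch.full_like(delta, keep_prob))
        merged[name] = base_w + delta * mask / keep_prob  # drop, then rescale
    return merged
```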
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
| Intel/neural-chat-7b-v3-1 | 59.06 | 66.21 | 83.64 | 62.37 | 59.65 | 78.14 | 19.56 | 43.84 |
| migtissera/SynthIA-7B-v1.3 | 57.11 | 62.12 | 83.45 | 62.65 | 51.37 | 78.85 | 17.59 | 43.76 |
| bhenrym14/mistral-7b-platypus-fp16 | 56.89 | 63.05 | 84.15 | 64.11 | 45.07 | 78.53 | 17.36 | 45.92 |
| jondurbin/airoboros-m-7b-3.1.2 | 56.24 | 61.86 | 83.51 | 61.91 | 53.75 | 77.58 | 13.87 | 41.2 |
| uukuguy/speechless-code-mistral-orca-7b-v1.0 | 55.33 | 59.64 | 82.25 | 61.33 | 48.45 | 77.51 | 8.26 | 49.89 |
| teknium/CollectiveCognition-v1.1-Mistral-7B | 53.87 | 62.12 | 84.17 | 62.35 | 57.62 | 75.37 | 15.62 | 19.85 |
| Open-Orca/Mistral-7B-SlimOrca | 53.34 | 62.54 | 83.86 | 62.77 | 54.23 | 77.43 | 21.38 | 11.2 |
| uukuguy/speechless-mistral-dolphin-orca-platypus-samantha-7b | 53.34 | 64.33 | 84.4 | 63.72 | 52.52 | 78.37 | 21.38 | 8.66 |
| ehartford/dolphin-2.2.1-mistral-7b | 53.06 | 63.48 | 83.86 | 63.28 | 53.17 | 78.37 | 21.08 | 8.19 |
| teknium/CollectiveCognition-v1-Mistral-7B | 52.55 | 62.37 | 85.5 | 62.76 | 54.48 | 77.58 | 17.89 | 7.22 |
| HuggingFaceH4/zephyr-7b-alpha | 52.4 | 61.01 | 84.04 | 61.39 | 57.9 | 78.61 | 14.03 | 9.82 |
| ehartford/samantha-1.2-mistral-7b | 52.16 | 64.08 | 85.08 | 63.91 | 50.4 | 78.53 | 16.98 | 6.13 |
|
kenzic/distilhubert-finetuned-gtzan-2 | kenzic | "2024-08-13T17:02:50Z" | 6 | 0 | null | [
"tensorboard",
"safetensors",
"hubert",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"region:us"
] | null | "2024-08-13T14:05:40Z" | ---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan-2
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan-2
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6290
- Accuracy: 0.83
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
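A hypothetical reconstruction of that configuration as 🤗 `TrainingArguments` (argument names follow the standard Trainer API; `output_dir` is assumed):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan-2",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    fp16=True,  # mixed_precision_training: Native AMP
)
```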
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9857 | 1.0 | 90 | 1.8850 | 0.56 |
| 1.2735 | 2.0 | 180 | 1.3243 | 0.64 |
| 1.0297 | 3.0 | 270 | 1.0371 | 0.7 |
| 0.6856 | 4.0 | 360 | 0.9535 | 0.74 |
| 0.5659 | 5.0 | 450 | 0.7661 | 0.78 |
| 0.4125 | 6.0 | 540 | 0.6502 | 0.81 |
| 0.3883 | 7.0 | 630 | 0.6516 | 0.83 |
| 0.2705 | 8.0 | 720 | 0.6270 | 0.81 |
| 0.2147 | 9.0 | 810 | 0.6383 | 0.83 |
| 0.17 | 10.0 | 900 | 0.6290 | 0.83 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
FinancialSupport/saiga-7b | FinancialSupport | "2024-03-25T21:31:58Z" | 4,199 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"it",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-28T16:31:58Z" | ---
language:
- it
license: apache-2.0
model-index:
- name: saiga-7b
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 63.14
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.14
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.66
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 54.99
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.01
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 45.11
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FinancialSupport/saiga-7b
name: Open LLM Leaderboard
---
The saiga is a strange antelope hybrid that lives in the Siberian steppes.
The name comes from it being a relative of fauno/camoscio and a distant cousin of cerbero (other Italian open-source models).
It's a project carried out on weekends, with little money and time available.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FinancialSupport__saiga-7b)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.51|
|AI2 Reasoning Challenge (25-Shot)|63.14|
|HellaSwag (10-Shot) |83.14|
|MMLU (5-Shot) |61.66|
|TruthfulQA (0-shot) |54.99|
|Winogrande (5-shot) |79.01|
|GSM8k (5-shot) |45.11|
|
TheBloke/DareVox-7B-GGUF | TheBloke | "2024-01-17T22:02:26Z" | 111 | 2 | transformers | [
"transformers",
"gguf",
"mistral",
"merge",
"mergekit",
"lazymergekit",
"teknium/OpenHermes-2.5-Mistral-7B",
"abacusai/Slerp-CM-mist-dpo",
"berkeley-nest/Starling-LM-7B-alpha",
"base_model:abideen/DareVox-7B",
"base_model:quantized:abideen/DareVox-7B",
"license:apache-2.0",
"region:us"
] | null | "2024-01-17T21:50:36Z" | ---
base_model: abideen/DareVox-7B
inference: false
license: apache-2.0
model_creator: Zain ul Abideen
model_name: DareVox 7B
model_type: mistral
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- merge
- mergekit
- lazymergekit
- teknium/OpenHermes-2.5-Mistral-7B
- abacusai/Slerp-CM-mist-dpo
- berkeley-nest/Starling-LM-7B-alpha
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# DareVox 7B - GGUF
- Model creator: [Zain ul Abideen](https://huggingface.co/abideen)
- Original model: [DareVox 7B](https://huggingface.co/abideen/DareVox-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Zain ul Abideen's DareVox 7B](https://huggingface.co/abideen/DareVox-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/DareVox-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/DareVox-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/DareVox-7B-GGUF)
* [Zain ul Abideen's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/abideen/DareVox-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [darevox-7b.Q2_K.gguf](https://huggingface.co/TheBloke/DareVox-7B-GGUF/blob/main/darevox-7b.Q2_K.gguf) | Q2_K | 2 | 2.72 GB| 5.22 GB | significant quality loss - not recommended for most purposes |
| [darevox-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/DareVox-7B-GGUF/blob/main/darevox-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [darevox-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/DareVox-7B-GGUF/blob/main/darevox-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [darevox-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/DareVox-7B-GGUF/blob/main/darevox-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [darevox-7b.Q4_0.gguf](https://huggingface.co/TheBloke/DareVox-7B-GGUF/blob/main/darevox-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [darevox-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/DareVox-7B-GGUF/blob/main/darevox-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [darevox-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/DareVox-7B-GGUF/blob/main/darevox-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [darevox-7b.Q5_0.gguf](https://huggingface.co/TheBloke/DareVox-7B-GGUF/blob/main/darevox-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [darevox-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/DareVox-7B-GGUF/blob/main/darevox-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [darevox-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/DareVox-7B-GGUF/blob/main/darevox-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [darevox-7b.Q6_K.gguf](https://huggingface.co/TheBloke/DareVox-7B-GGUF/blob/main/darevox-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [darevox-7b.Q8_0.gguf](https://huggingface.co/TheBloke/DareVox-7B-GGUF/blob/main/darevox-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/DareVox-7B-GGUF and below it, a specific filename to download, such as: darevox-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/DareVox-7B-GGUF darevox-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/DareVox-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/DareVox-7B-GGUF darevox-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m darevox-7b.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./darevox-7b.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./darevox-7b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
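As a starting point, a hedged LlamaCpp sketch; the import path has moved across LangChain versions, so treat it as an assumption:
```python
from langchain_community.llms import LlamaCpp  # older releases: from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./darevox-7b.Q4_K_M.gguf",  # downloaded as shown earlier
    n_ctx=32768,
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
)
prompt = "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\nWrite a haiku about llamas.\n\n### Response:"
print(llm.invoke(prompt))
```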
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Zain ul Abideen's DareVox 7B
# DareVox-7B
DareVox-7B is a merge of the following models:
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
* [abacusai/Slerp-CM-mist-dpo](https://huggingface.co/abacusai/Slerp-CM-mist-dpo)
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: teknium/OpenHermes-2.5-Mistral-7B
parameters:
density: 0.53
weight: 0.4
- model: abacusai/Slerp-CM-mist-dpo
parameters:
density: 0.53
weight: 0.3
- model: berkeley-nest/Starling-LM-7B-alpha
parameters:
density: 0.5
weight: 0.4
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "abideen/DareVox-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
<!-- original-model-card end -->
|
Mag0g/Ezekiel27_2 | Mag0g | "2024-05-14T12:15:36Z" | 128 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-14T12:14:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
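In the meantime, a generic sketch based only on the repo tags (`stablelm`, `text-generation`); the prompt and generation settings are assumptions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mag0g/Ezekiel27_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```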
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
datlaaaaaaa/6b319281-304b-48d5-911f-78c6d5201d27 | datlaaaaaaa | "2025-01-30T03:56:09Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-30T03:08:29Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6b319281-304b-48d5-911f-78c6d5201d27
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 932b975fca203429_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/932b975fca203429_train_data.json
type:
field_input: note
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: datlaaaaaaa/6b319281-304b-48d5-911f-78c6d5201d27
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/932b975fca203429_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 192f06f0-5909-42fe-bc5f-7c55cc9d7e7c
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 192f06f0-5909-42fe-bc5f-7c55cc9d7e7c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6b319281-304b-48d5-911f-78c6d5201d27
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0261
## Model description
More information needed
## Intended uses & limitations
More information needed
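Loading isn't documented; since this repo holds a PEFT LoRA adapter for `unsloth/SmolLM-360M` (per the config above), a sketch along these lines should apply, with the prompt being illustrative:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM-360M")
model = PeftModel.from_pretrained(base, "datlaaaaaaa/6b319281-304b-48d5-911f-78c6d5201d27")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM-360M")

inputs = tokenizer("Question: What is a LoRA adapter?\nAnswer:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```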
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1469 | 0.0107 | 200 | 1.0261 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ouhenio/tito-sft | ouhenio | "2025-01-05T02:58:40Z" | 150 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:ouhenio/tito",
"base_model:finetune:ouhenio/tito",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-05T00:45:42Z" | ---
base_model: ouhenio/tito
library_name: transformers
model_name: tito-sft
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for tito-sft
This model is a fine-tuned version of [ouhenio/tito](https://huggingface.co/ouhenio/tito).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ouhenio/tito-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ouhenio/huggingface/runs/gjdgyvv2)
This model was trained with SFT.
### Framework versions
- TRL: 0.13.0
- Transformers: 4.47.1
- Pytorch: 2.1.2
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
islexu/pretrained_model | islexu | "2025-03-05T17:00:53Z" | 0 | 0 | null | [
"safetensors",
"license:apache-2.0",
"region:us"
] | null | "2025-03-05T16:56:15Z" | ---
license: apache-2.0
---
|
AerinK/NotSoXJB-Mix-1 | AerinK | "2023-04-15T13:36:00Z" | 0 | 26 | null | [
"text-to-image",
"stable-diffusion",
"en",
"dataset:Nerfgun3/bad_prompt",
"license:openrail",
"region:us"
] | text-to-image | "2023-04-02T12:28:12Z" | ---
license: openrail
datasets:
- Nerfgun3/bad_prompt
language:
- en
tags:
- text-to-image
- stable-diffusion
---
(Yes, I'm mimicking how WarriorMama777 does this page. But I don't really know how, and I'm too lazy to learn this.)
↓Licence
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:
You can't use the model to deliberately produce or share illegal or harmful outputs or content.
The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully). Please read the full license here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
Terms of use
Clearly indicate where modifications have been made.
If you used it for merging, please state what steps you took to do so.
↓Disclaimer
READ MORE: Disclaimer
The user has complete control over whether or not to generate NSFW content, and the user's decision to enjoy either SFW or NSFW is entirely up to the user. The learning model does not contain any obscene visual content that can be viewed with a single click. The posting of the learning model is not intended to display obscene material in a public place.

In publishing examples of the generation of copyrighted characters, I consider the following to be exceptional cases in which unauthorised use is permitted: when the use is for private use or research purposes; when the work is used as material for merchandising (however, this does not apply when the main use of the work is to be merchandised); when the work is used in criticism, commentary or news reporting; when the work is used as a parody or derivative work to demonstrate originality. In these cases, use against the will of the copyright holder or use for unjustified gain should still be avoided, and if a complaint is lodged by the copyright holder, it is guaranteed that the publication will be stopped as soon as possible.

I would also like to note that I am aware that many of the merged models use NAI, which is trained on Danbooru and other sites that could be interpreted as illegal, and whose model data itself is also a leak, and that this should be watched carefully. I believe that the best we can do is to expand the possibilities of generative AI while protecting the works of illustrators and artists.
↓About
The main model, “NSX-1 (NotSoXJBMix-1)”, is a merged model that generates high-quality anime-style pictures. This model can generate a wide variety of content. Hope this model can help you visualize your imagination.

“Hope everyone can visualize their imagination.”
↓NSX1
Features: High quality, anime illustration style.
1. Normally won’t generate NSFW pictures.
2. Full support of NSFW generation.
In addition, thanks to excellent extensions such as [ModelToolkit](https://github.com/arenatemp/stable-diffusion-webui-model-toolkit), the released file could be kept small (otherwise this model could be 5~6 GB).
↓Variations
NSX1A
Features: More flat style. I like to apply anime character LoRAs with this.
NSX1B
Features: More colorful, more pastel style.
NSX1C
Features: More realistic light and shadow, more realistic texture. Close to AOM3.
NSX1D
Features: Added pastel-mix. A plus version of NSX1B.
NSX1Night
Features: More NSFW atmosphere.
NSX1EzBackground
Features: Can generate an illustration with a background even if you are bad at writing background prompts.
More


When generating illustrations for the general public: write "nsfw" in the negative prompt field.
When generating adult illustrations: "nsfw" in the positive prompt field is optional; NSFW images can be generated without it, but including it makes the atmosphere more NSFW.
↓Gallery

More examples: https://civitai.com/models/28200/notsoxjbmix-1
↓How to use
Prompts
I’m using EasyNegative and badhandsv4 for negative prompts (or just use "(worst quality, low quality:1.4)").
How to avoid bokeh: (depth of field, bokeh, blurry:1.4)
How to remove mosaic: (censored, mosaic censoring, bar censor, convenient censoring, pointless censoring:1.0)
How to remove blush: (blush, embarrassed, nose blush, light blush, full-face blush:1.4)
How to remove NSFW effects: (trembling, motion lines, motion blur, emphasis lines:1.2)
Sampler: pick your own choice.
Steps (not too high):
DPM++ SDE Karras: 20 to 40
DPM++ 2M Karras: 28 to 40
Euler a: 30 to 50
Clip skip: 1 or 2
CFG: 6 to 12
Upscaler:
Detailed illust: Latent (nearest-exact) / RealESRGAN_4xplus_anime_6B
Denoising strength: 0.5~0.6
Simple upscale: whatever
Model details / Recipe
↓New Hash(short)
NSX-1.safetensors
8ee9ff7d94
NSX-1A-purned.safetensors
5c9f713a34
NSX-1B-purned.safetensors
536eab3410
NSX-1C-purned.safetensors
ba9f4f9007
NSX-1D-purned.safetensors
c0edebdde7
NotSoXJB-1Night-purned.safetensors
e4c8f27226
↓Use Models
(new short Hash)
1.AOM3A1B.safetensors
[5493a0ec49]
2.Counterfeit-V2.5_pruned.safetensors
[a074b8864e]
3.viewerMixV17_viewerMixV17V2.safetensors
[c47e3a94e9]
4.nyanMix_230303Absurd2.safetensors
[8ac3e79e96]
5.nightSkyYOZORAStyle_yozoraV1PurnedFp16.safetensors
[4b118b2d1b]
6.colorBoxModel_colorBOX.safetensors
[93a20525f5]
7.9527_v10.ckpt
[40a9f4ec37]
8.furnace34_furnace34.safetensors
[c0653dd6d0]
9.pastelmix.safetensors
[fa818fcf2c]
10.AOM3_aom3a3.safetensors
[eb4099ba9c]
11.hassakuHentaiModel_hassakuv1.safetensors
[df614cd3c2]
↓NSX1
Step1:

Step2:

Step3:

Step4:

Step5:

Step6:

↓NSX1A

↓NSX1B
NotSoXJB + (9527_v10-NotSoXJB-1) - 0.45 Add difference
↓NSX1C
NotSoXJB-1 + (AOM3-NotSoXJB-1) - 0.6 Add difference
↓NSX1D

↓NSX1Night
NotSoXJB + (hassakuHentaiModel-NotSoXJB-1) - 0.45 Add difference
↓NSX1EzBackground
NotSoXJB + (CounterfeitV2.5-pruned-NotSoXJB-1) - 0.3 Add difference |
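All the "Add difference" recipes above follow the same pattern, merged = A + alpha * (B - C). A rough state-dict sketch (file names are placeholders, and webui-specific key handling is ignored):
```python
from safetensors.torch import load_file, save_file

def add_difference(a: dict, b: dict, c: dict, alpha: float) -> dict:
    """merged = A + alpha * (B - C), applied tensor-by-tensor over shared keys."""
    return {k: a[k] + alpha * (b[k] - c[k]) for k in a if k in b and k in c}

# Placeholder file names; substitute the checkpoints named in each recipe
a = load_file("model_A.safetensors")
b = load_file("model_B.safetensors")
c = load_file("model_C.safetensors")
save_file(add_difference(a, b, c, alpha=0.45), "merged.safetensors")
```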
SungWei/my_awesome_billsum_model | SungWei | "2023-08-05T11:30:03Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-08-05T09:34:54Z" | ---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.2033
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6638
- Rouge1: 0.2033
- Rouge2: 0.1149
- Rougel: 0.1762
- Rougelsum: 0.1759
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
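No usage is documented; since this is a T5 summarization fine-tune, a sketch like the following should apply (whether the `summarize:` prefix is needed is an assumption carried over from standard T5 usage):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="SungWei/my_awesome_billsum_model")

text = "summarize: " + "Section 1. This bill establishes a grant program for ..."  # placeholder bill text
print(summarizer(text, max_length=60)[0]["summary_text"])
```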
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 248 | 1.9584 | 0.1999 | 0.1073 | 0.1716 | 0.1717 | 19.0 |
| No log | 2.0 | 496 | 1.8621 | 0.195 | 0.1045 | 0.1685 | 0.1682 | 19.0 |
| 2.2512 | 3.0 | 744 | 1.8095 | 0.1973 | 0.1109 | 0.1728 | 0.1727 | 19.0 |
| 2.2512 | 4.0 | 992 | 1.7797 | 0.1989 | 0.1102 | 0.1724 | 0.1724 | 19.0 |
| 1.8144 | 5.0 | 1240 | 1.7505 | 0.1997 | 0.112 | 0.1735 | 0.1736 | 19.0 |
| 1.8144 | 6.0 | 1488 | 1.7308 | 0.2003 | 0.1134 | 0.1746 | 0.1744 | 19.0 |
| 1.6898 | 7.0 | 1736 | 1.7145 | 0.199 | 0.1114 | 0.1732 | 0.173 | 19.0 |
| 1.6898 | 8.0 | 1984 | 1.7083 | 0.1977 | 0.1106 | 0.1718 | 0.1716 | 19.0 |
| 1.5997 | 9.0 | 2232 | 1.6983 | 0.2014 | 0.1127 | 0.175 | 0.175 | 19.0 |
| 1.5997 | 10.0 | 2480 | 1.6923 | 0.2014 | 0.1153 | 0.1754 | 0.1753 | 19.0 |
| 1.5403 | 11.0 | 2728 | 1.6826 | 0.2009 | 0.1134 | 0.1752 | 0.1751 | 19.0 |
| 1.5403 | 12.0 | 2976 | 1.6768 | 0.2003 | 0.1125 | 0.1745 | 0.1744 | 19.0 |
| 1.491 | 13.0 | 3224 | 1.6722 | 0.2016 | 0.1146 | 0.1756 | 0.1755 | 19.0 |
| 1.491 | 14.0 | 3472 | 1.6750 | 0.2039 | 0.1164 | 0.1773 | 0.177 | 19.0 |
| 1.4496 | 15.0 | 3720 | 1.6679 | 0.2023 | 0.1147 | 0.1765 | 0.1763 | 19.0 |
| 1.4496 | 16.0 | 3968 | 1.6677 | 0.2032 | 0.1148 | 0.177 | 0.1768 | 19.0 |
| 1.4241 | 17.0 | 4216 | 1.6640 | 0.2021 | 0.1135 | 0.1752 | 0.175 | 19.0 |
| 1.4241 | 18.0 | 4464 | 1.6645 | 0.2027 | 0.1155 | 0.1766 | 0.1764 | 19.0 |
| 1.4025 | 19.0 | 4712 | 1.6632 | 0.2028 | 0.1149 | 0.1761 | 0.1757 | 19.0 |
| 1.4025 | 20.0 | 4960 | 1.6638 | 0.2033 | 0.1149 | 0.1762 | 0.1759 | 19.0 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1
- Datasets 2.14.1
- Tokenizers 0.13.3
|
chew-models/no_overlap_v1_chew_distilbert-base-uncased_change_nochange_noov_v2 | chew-models | "2025-03-31T23:03:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-31T21:47:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
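As a stopgap, a generic sketch based only on the repo tags (`distilbert`, `text-classification`):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="chew-models/no_overlap_v1_chew_distilbert-base-uncased_change_nochange_noov_v2",
)
print(clf("Example sentence to classify."))  # label names come from the fine-tuned head
```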
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-GGUF | mradermacher | "2024-11-07T19:32:35Z" | 13 | 1 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-06T18:59:32Z" | ---
base_model: Goekdeniz-Guelmez/J.O.S.I.E.-x-Qwen1.5-4B-Chat
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Goekdeniz-Guelmez/J.O.S.I.E.-x-Qwen1.5-4B-Chat
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-GGUF/resolve/main/J.O.S.I.E.-x-Qwen1.5-4B-Chat.Q2_K.gguf) | Q2_K | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-GGUF/resolve/main/J.O.S.I.E.-x-Qwen1.5-4B-Chat.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-GGUF/resolve/main/J.O.S.I.E.-x-Qwen1.5-4B-Chat.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-GGUF/resolve/main/J.O.S.I.E.-x-Qwen1.5-4B-Chat.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-GGUF/resolve/main/J.O.S.I.E.-x-Qwen1.5-4B-Chat.IQ4_XS.gguf) | IQ4_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-GGUF/resolve/main/J.O.S.I.E.-x-Qwen1.5-4B-Chat.Q4_K_S.gguf) | Q4_K_S | 2.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-GGUF/resolve/main/J.O.S.I.E.-x-Qwen1.5-4B-Chat.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-GGUF/resolve/main/J.O.S.I.E.-x-Qwen1.5-4B-Chat.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-GGUF/resolve/main/J.O.S.I.E.-x-Qwen1.5-4B-Chat.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-GGUF/resolve/main/J.O.S.I.E.-x-Qwen1.5-4B-Chat.Q6_K.gguf) | Q6_K | 3.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-GGUF/resolve/main/J.O.S.I.E.-x-Qwen1.5-4B-Chat.Q8_0.gguf) | Q8_0 | 4.3 | fast, best quality |
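To fetch a single quant from Python instead of the CLI, one option (the file name is taken from the table above):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/J.O.S.I.E.-x-Qwen1.5-4B-Chat-GGUF",
    filename="J.O.S.I.E.-x-Qwen1.5-4B-Chat.Q4_K_M.gguf",
)
print(path)  # local path of the cached GGUF file
```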
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
berksaltuk/room-classifier | berksaltuk | "2024-07-29T13:18:56Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-07-29T13:18:50Z" | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: room-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9111111164093018
---
# room-classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
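Inference should work through the standard image-classification pipeline; the image path below is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="berksaltuk/room-classifier")
print(classifier("room_photo.jpg"))  # labels match the example classes below
```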
## Example Images
#### bathroom

#### bedroom

#### kitchen room

#### living room
 |
jjenny/dfsdfsfs222 | jjenny | "2023-06-23T04:29:58Z" | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | "2023-06-23T00:24:35Z" | ---
license: bigscience-openrail-m
---
|
needzoo/unsloth_finetune | needzoo | "2025-04-16T18:25:35Z" | 0 | 0 | transformers | [
"transformers",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-04-16T18:21:15Z" | ---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** needzoo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hoanbklucky/dinov2-small-imagenet1k-1-layer-finetuned-noh | hoanbklucky | "2024-12-27T20:47:10Z" | 149 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"dinov2",
"image-classification",
"generated_from_trainer",
"base_model:facebook/dinov2-small-imagenet1k-1-layer",
"base_model:finetune:facebook/dinov2-small-imagenet1k-1-layer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-12-16T16:33:33Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/dinov2-small-imagenet1k-1-layer
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dinov2-small-imagenet1k-1-layer-finetuned-noh
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dinov2-small-imagenet1k-1-layer-finetuned-noh
This model is a fine-tuned version of [facebook/dinov2-small-imagenet1k-1-layer](https://huggingface.co/facebook/dinov2-small-imagenet1k-1-layer) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3366
- Accuracy: 0.8982
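A minimal inference sketch (assuming a standard Dinov2 image-classification head; the image path is a placeholder):

```python
# Hedged inference sketch for the fine-tuned Dinov2 classifier
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "hoanbklucky/dinov2-small-imagenet1k-1-layer-finetuned-noh"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```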
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.4924 | 1.0 | 23 | 0.5212 | 0.8325 |
| 0.5732 | 2.0 | 46 | 0.3366 | 0.8982 |
| 0.5639 | 3.0 | 69 | 0.3907 | 0.8489 |
| 0.4759 | 4.0 | 92 | 0.3482 | 0.8818 |
| 0.3757 | 5.0 | 115 | 0.3921 | 0.8276 |
| 0.3356 | 6.0 | 138 | 0.3184 | 0.8966 |
| 0.2521 | 7.0 | 161 | 0.3992 | 0.8571 |
| 0.2981 | 8.0 | 184 | 0.3904 | 0.8703 |
| 0.2302 | 9.0 | 207 | 0.3987 | 0.8719 |
| 0.1979 | 9.5778 | 220 | 0.4129 | 0.8604 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1
- Datasets 2.19.1
- Tokenizers 0.21.0
|
sulinliu/ddpd | sulinliu | "2025-02-15T17:15:03Z" | 0 | 0 | null | [
"text-generation",
"arxiv:2410.06264",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-10-08T13:08:50Z" | ---
license: apache-2.0
pipeline_tag: text-generation
---
This repository contains the weights of the paper [Think While You Generate: Discrete Diffusion with Planned Denoising](https://huggingface.co/papers/2410.06264). |
M4A1TasteGood/finetune-instance-segmentation-ade20k-mini-mask2former | M4A1TasteGood | "2024-11-13T05:42:11Z" | 39 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mask2former",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2024-11-13T04:57:08Z" | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: finetune-instance-segmentation-ade20k-mini-mask2former
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune-instance-segmentation-ade20k-mini-mask2former
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 7.5549
- Map: 1.0
- Map 50: 1.0
- Map 75: 1.0
- Map Small: -1.0
- Map Medium: -1.0
- Map Large: 1.0
- Mar 1: 1.0
- Mar 10: 1.0
- Mar 100: 1.0
- Mar Small: -1.0
- Mar Medium: -1.0
- Mar Large: 1.0
- Map Node 0: 1.0
- Mar 100 Node 0: 1.0
- Map Node 1: -1.0
- Mar 100 Node 1: -1.0
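For reference, a minimal inference sketch (assuming a standard Mask2Former instance-segmentation checkpoint; the image path is a placeholder):

```python
# Hedged inference sketch for the fine-tuned Mask2Former model
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

model_id = "M4A1TasteGood/finetune-instance-segmentation-ade20k-mini-mask2former"
processor = AutoImageProcessor.from_pretrained(model_id)
model = Mask2FormerForUniversalSegmentation.from_pretrained(model_id)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Post-process into per-instance masks at the original resolution (height, width)
result = processor.post_process_instance_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(result["segments_info"])
```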
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Node 0 | Mar 100 Node 0 | Map Node 1 | Mar 100 Node 1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:----------:|:---------:|:-----:|:------:|:-------:|:---------:|:----------:|:---------:|:----------:|:--------------:|:----------:|:--------------:|
| 12.0634 | 1.0 | 1 | 34.0342 | 0.0388 | 0.1429 | 0.0128 | -1.0 | -1.0 | 0.083 | 0.0 | 0.2 | 1.0 | -1.0 | -1.0 | 1.0 | 0.0388 | 1.0 | -1.0 | -1.0 |
| 14.6891 | 2.0 | 2 | 27.5947 | 0.0378 | 0.125 | 0.0159 | -1.0 | -1.0 | 0.0707 | 0.0 | 0.2 | 1.0 | -1.0 | -1.0 | 1.0 | 0.0378 | 1.0 | -1.0 | -1.0 |
| 12.1372 | 3.0 | 3 | 24.3069 | 0.0538 | 0.2 | 0.0172 | -1.0 | -1.0 | 0.225 | 0.0 | 0.2 | 1.0 | -1.0 | -1.0 | 1.0 | 0.0538 | 1.0 | -1.0 | -1.0 |
| 10.1768 | 4.0 | 4 | 21.7997 | 0.0325 | 0.0909 | 0.0179 | -1.0 | -1.0 | 0.0952 | 0.0 | 0.0 | 1.0 | -1.0 | -1.0 | 1.0 | 0.0325 | 1.0 | -1.0 | -1.0 |
| 10.9674 | 5.0 | 5 | 20.0476 | 0.0507 | 0.1667 | 0.0185 | -1.0 | -1.0 | 0.1833 | 0.0 | 0.3 | 1.0 | -1.0 | -1.0 | 1.0 | 0.0507 | 1.0 | -1.0 | -1.0 |
| 8.2576 | 6.0 | 6 | 19.0059 | 0.0582 | 0.25 | 0.0185 | -1.0 | -1.0 | 0.2054 | 0.0 | 0.2 | 1.0 | -1.0 | -1.0 | 1.0 | 0.0582 | 1.0 | -1.0 | -1.0 |
| 8.2583 | 7.0 | 7 | 18.1974 | 0.0629 | 0.3333 | 0.0204 | -1.0 | -1.0 | 0.195 | 0.0 | 0.2 | 1.0 | -1.0 | -1.0 | 1.0 | 0.0629 | 1.0 | -1.0 | -1.0 |
| 7.3192 | 8.0 | 8 | 17.3621 | 0.1302 | 0.5 | 0.0435 | -1.0 | -1.0 | 0.2841 | 0.0 | 0.2 | 1.0 | -1.0 | -1.0 | 1.0 | 0.1302 | 1.0 | -1.0 | -1.0 |
| 7.0464 | 9.0 | 9 | 16.5316 | 0.2514 | 1.0 | 0.0625 | -1.0 | -1.0 | 0.3417 | 0.2 | 0.2 | 1.0 | -1.0 | -1.0 | 1.0 | 0.2514 | 1.0 | -1.0 | -1.0 |
| 6.8925 | 10.0 | 10 | 15.5454 | 0.2157 | 1.0 | 0.0196 | -1.0 | -1.0 | 0.2333 | 0.2 | 0.2 | 1.0 | -1.0 | -1.0 | 1.0 | 0.2157 | 1.0 | -1.0 | -1.0 |
| 6.9519 | 11.0 | 11 | 14.5089 | 0.225 | 1.0 | 0.0312 | -1.0 | -1.0 | 0.2667 | 0.2 | 0.2 | 1.0 | -1.0 | -1.0 | 1.0 | 0.225 | 1.0 | -1.0 | -1.0 |
| 6.2326 | 12.0 | 12 | 13.8831 | 0.3733 | 1.0 | 0.2 | -1.0 | -1.0 | 0.4114 | 0.3 | 0.6 | 1.0 | -1.0 | -1.0 | 1.0 | 0.3733 | 1.0 | -1.0 | -1.0 |
| 6.1844 | 13.0 | 13 | 13.3770 | 0.6667 | 1.0 | 0.3333 | -1.0 | -1.0 | 0.6667 | 0.5 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 0.6667 | 1.0 | -1.0 | -1.0 |
| 5.7356 | 14.0 | 14 | 12.8322 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 5.466 | 15.0 | 15 | 12.5074 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 5.4641 | 16.0 | 16 | 11.8883 | 0.65 | 1.0 | 0.5 | -1.0 | -1.0 | 0.65 | 0.3 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 0.65 | 1.0 | -1.0 | -1.0 |
| 5.3664 | 17.0 | 17 | 11.4002 | 0.65 | 1.0 | 0.5 | -1.0 | -1.0 | 0.65 | 0.3 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 0.65 | 1.0 | -1.0 | -1.0 |
| 4.9014 | 18.0 | 18 | 10.9808 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 4.7852 | 19.0 | 19 | 10.7451 | 0.65 | 1.0 | 0.5 | -1.0 | -1.0 | 0.65 | 0.3 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 0.65 | 1.0 | -1.0 | -1.0 |
| 4.7773 | 20.0 | 20 | 10.5880 | 0.6167 | 1.0 | 0.3333 | -1.0 | -1.0 | 0.6167 | 0.4 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 0.6167 | 1.0 | -1.0 | -1.0 |
| 4.6423 | 21.0 | 21 | 10.3569 | 0.75 | 1.0 | 0.5 | -1.0 | -1.0 | 0.75 | 0.5 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 0.75 | 1.0 | -1.0 | -1.0 |
| 4.6973 | 22.0 | 22 | 10.0560 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 4.5107 | 23.0 | 23 | 9.9010 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 4.3641 | 24.0 | 24 | 9.8444 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 4.3039 | 25.0 | 25 | 9.7284 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 4.2061 | 26.0 | 26 | 9.4944 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 4.1906 | 27.0 | 27 | 9.3099 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 3.9988 | 28.0 | 28 | 9.0558 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 3.9956 | 29.0 | 29 | 8.9970 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 3.9154 | 30.0 | 30 | 8.8224 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 3.8152 | 31.0 | 31 | 8.6420 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 3.7358 | 32.0 | 32 | 8.4847 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 3.7624 | 33.0 | 33 | 8.4232 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 3.6491 | 34.0 | 34 | 8.2848 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 3.5853 | 35.0 | 35 | 8.0934 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 3.5897 | 36.0 | 36 | 8.1184 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 3.4895 | 37.0 | 37 | 7.9605 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 3.415 | 38.0 | 38 | 7.8289 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 3.3717 | 39.0 | 39 | 7.7094 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
| 3.3056 | 40.0 | 40 | 7.5549 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 | 1.0 | 1.0 | 1.0 | -1.0 | -1.0 |
### Framework versions
- Transformers 4.47.0.dev0
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
Cossale/frames | Cossale | "2024-11-10T15:46:21Z" | 49 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-10-31T17:05:20Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: FRM$
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
widget:
- text: a road leading to a mountain in a night, visible moon and stars. FRM$
output:
url: images/example_du3zlevlr.png
- text: >-
a snowy mountain with lavander haze over the horizon, distant mountain,
evening time, birds. FRM$
output:
url: images/example_ajesrotih.png
- text: >-
a mountain range with a large mountain in center, dusk, no sun, forest,
pink dominated image. FRM$
output:
url: images/example_m0sl2j6rp.png
---
# Frames
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `FRM$` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
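They can also be loaded on top of the base model with 🤗 diffusers; a hedged sketch (the inference settings are assumptions, not tested values):

```python
# Hedged sketch: apply the LoRA to FLUX.1-dev with diffusers
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Cossale/frames")  # this repo's LoRA weights

# Remember to include the trigger word FRM$ in the prompt
image = pipe(
    "a road leading to a mountain at night, visible moon and stars. FRM$",
    num_inference_steps=28,  # assumed setting
    guidance_scale=3.5,      # assumed setting
).images[0]
image.save("frames_example.png")
```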
|
martimfasantos/tinyllama-1.1b-mt-dpo-full_LR1e-7_BS32_rmsprop_3epochs_compare | martimfasantos | "2024-07-15T19:14:42Z" | 8 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"dpo",
"generated_from_trainer",
"dataset:haoranxu/ALMA-R-Preference",
"base_model:martimfasantos/tinyllama-1.1b-mt-sft-full",
"base_model:finetune:martimfasantos/tinyllama-1.1b-mt-sft-full",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-15T14:01:45Z" | ---
license: apache-2.0
base_model: martimfasantos/tinyllama-1.1b-mt-sft-full
tags:
- alignment-handbook
- trl
- dpo
- generated_from_trainer
datasets:
- haoranxu/ALMA-R-Preference
model-index:
- name: tinyllama-1.1b-mt-dpo-full_LR1e-7_BS32_rmsprop_3epochs_compare
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-1.1b-mt-dpo-full_LR1e-7_BS32_rmsprop_3epochs_compare
This model is a fine-tuned version of [martimfasantos/tinyllama-1.1b-mt-sft-full](https://huggingface.co/martimfasantos/tinyllama-1.1b-mt-sft-full) on the haoranxu/ALMA-R-Preference dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
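As a rough illustration only (the exact training script is not part of this card), the settings above map onto a TRL `DPOConfig` along these lines; the output directory is a placeholder:

```python
# Hedged sketch: the hyperparameters above expressed as a TRL DPOConfig
from trl import DPOConfig

args = DPOConfig(
    output_dir="tinyllama-1.1b-mt-dpo-full",  # placeholder
    learning_rate=1e-7,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=32,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)
```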
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
Finnish-NLP/ul2-mini-nl8-finnish | Finnish-NLP | "2022-11-15T17:15:44Z" | 162 | 1 | transformers | [
"transformers",
"pytorch",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"finnish",
"t5x",
"seq2seq",
"ul2",
"fi",
"dataset:Finnish-NLP/mc4_fi_cleaned",
"dataset:wikipedia",
"arxiv:1910.10683",
"arxiv:2205.05131",
"arxiv:2002.05202",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | "2022-11-03T15:38:08Z" | ---
language:
- fi
license: apache-2.0
tags:
- finnish
- t5
- t5x
- seq2seq
- ul2
datasets:
- Finnish-NLP/mc4_fi_cleaned
- wikipedia
inference: false
---
# UL2-mini-nl8 for Finnish
Pretrained T5 model on Finnish language using a UL2 (Mixture-of-Denoisers) objective. T5 model was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
The UL2 objective was introduced in
[this paper](https://arxiv.org/abs/2205.05131)
and first released at [this page](https://github.com/google-research/google-research/tree/master/ul2).
**Note:** The Hugging Face inference widget is deactivated because this model needs a text-to-text fine-tuning on a specific downstream task to be useful in practice. As an example of a fine-tuned Finnish T5 model, you can check [Finnish-NLP/t5-small-nl24-casing-punctuation-correction](https://huggingface.co/Finnish-NLP/t5-small-nl24-casing-punctuation-correction) which has been fine-tuned to correct missing casing and punctuation for Finnish text.
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
Finnish T5 is a transformers model pretrained on a very large corpus of Finnish data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and outputs from those texts.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off in pretraining (quality win). Dropout should be re-enabled during fine-tuning
- Pretrained on self-supervised objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
This model also used the "efficient" T5 architecture findings presented in [this paper](https://arxiv.org/abs/2109.10686). In a nutshell, the paper indicates that a Deep-Narrow model architecture is favorable for downstream performance compared to other model architectures of similar parameter count. To be more precise, model depth is defined as the number of transformer blocks that are stacked sequentially.
This model uses the [t5-efficient-mini-nl8](https://huggingface.co/google/t5-efficient-mini-nl8) architecture's layer depth which means both the encoder and the decoder have 8 transformer layers compared to the original T5 "mini" model's architecture of 4 transformer layers.
In total, this model has 72 million parameters.
### UL2 pretraining objective
This model was pretrained with the UL2's Mixture-of-Denoisers (MoD) objective, that combines diverse pre-training paradigms together. UL2 frames different objective functions for training language models as denoising tasks, where the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers that samples from a varied set of such objectives, each with different configurations. UL2 is trained using a mixture of three denoising tasks: (1) R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective; (2) X-denoising (or extreme span corruption); and (3) S-denoising (or sequential PrefixLM). During pre-training, we sample from the available denoising tasks based on user-specified ratios.
UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training denoising task. During the pretraining, a paradigm token is inserted to the input (`[NLU]` for R-denoising, `[NLG]` for X-denoising, or `[S2S]` for S-denoising) indicating the denoising task at hand. Then, during fine-tuning the same input token should be inserted to get the best performance for different downstream fine-tuning tasks.
## Intended uses & limitations
This model was only pretrained in a self-supervised way, excluding any supervised training. Therefore, this model has to be fine-tuned before it is usable on a downstream task, like text classification, unlike Google's original T5 model. **Note:** You most likely need to fine-tune these T5/UL2 models without mixed precision, so fine-tune them with full fp32 precision. You can also find more fine-tuning tips [here](https://discuss.huggingface.co/t/t5-finetuning-tips), for example.
**Note**: For fine-tuning, you can most likely get better results if you insert a prefix token of `[NLU]`, `[NLG]`, or `[S2S]` into your input texts. For general language understanding fine-tuning tasks, you could use the `[NLU]` token. For GPT-style causal language generation, you could use the `[S2S]` token. The `[NLG]` token of the X-denoising pretraining task is somewhat of a mix between language understanding and causal language generation, so `[NLG]` could perhaps be used for language generation fine-tuning too.
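For example, a minimal sketch of prepending the `[NLU]` paradigm token before tokenization (the input sentence is a placeholder):

```python
# Hedged sketch: prepend the UL2 paradigm token to inputs before tokenization
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")
texts = ["Tämä on esimerkkilause."]  # placeholder Finnish input
inputs = tokenizer(["[NLU] " + t for t in texts], return_tensors="pt", padding=True)
```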
### How to use
Here is how to use this model in PyTorch:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")
model = T5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")
```
and in TensorFlow:
```python
from transformers import T5Tokenizer, TFT5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish")
model = TFT5ForConditionalGeneration.from_pretrained("Finnish-NLP/ul2-mini-nl8-finnish", from_pt=True)
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral. Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
This Finnish T5 model was pretrained on the combination of six datasets:
- [mc4_fi_cleaned](https://huggingface.co/datasets/Finnish-NLP/mc4_fi_cleaned), the dataset mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus. We used the Finnish subset of the mC4 dataset and further cleaned it with our own text data cleaning codes (check the dataset repo).
- [wikipedia](https://huggingface.co/datasets/wikipedia) We used the Finnish subset of the wikipedia (August 2021) dataset
- [Yle Finnish News Archive 2011-2018](http://urn.fi/urn:nbn:fi:lb-2017070501)
- [Yle Finnish News Archive 2019-2020](http://urn.fi/urn:nbn:fi:lb-2021050401)
- [Finnish News Agency Archive (STT)](http://urn.fi/urn:nbn:fi:lb-2018121001)
- [The Suomi24 Sentences Corpus](http://urn.fi/urn:nbn:fi:lb-2020021803)
Raw datasets were automatically cleaned to filter out bad quality and non-Finnish examples. Also, a [perplexity](https://huggingface.co/course/chapter7/3#perplexity-for-language-models) score was calculated for all texts with a KenLM model which was trained with very clean Finnish texts only. This perplexity score can then be used to determine how "clean" Finnish language the text contains. Lastly, all datasets were concatenated and the top 90% perplexity score was used as a filtering threshold to filter out the worst quality 10% of texts. Together these cleaned datasets were around 76GB of text.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 32000. The inputs and the outputs are sequences of 512 consecutive tokens. Texts are not lower cased so this model is case-sensitive: it makes a difference between finnish and Finnish.
### Pretraining
The model was trained on a TPUv3-8 VM, sponsored by the [Google TPU Research Cloud](https://sites.research.google/trc/about/), for 500K steps with a batch size of 256 (in total 66B tokens). The optimizer used was AdaFactor with a learning rate warmup for 10K steps at a constant learning rate of 1e-2, followed by an inverse square root decay (exponential decay) of the learning rate.
Training code was from the Google's Jax/Flax based [t5x framework](https://github.com/google-research/t5x) and also some t5x task definitions were adapted from [Per's t5x work](https://huggingface.co/pere).
The UL2 training objective code used with the [t5x framework](https://github.com/google-research/t5x) was copied and slightly modified from the [UL2 paper](https://arxiv.org/pdf/2205.05131.pdf) appendix chapter 9.2. Used UL2 objective code is available in this repository in the files `ul2_objective.py` and `tasks.py`.
UL2's mixture-of-denoisers configuration was otherwise equal to the UL2 paper but for the rate of mixing denoisers, 20% for S-denoising was used (suggested at the paper chapter 4.5) and the rest was divided equally between the R-denoising and X-denoising (i.e. 40% for both).
## Evaluation results
Evaluation was done by fine-tuning the model on a downstream text classification task with two different labeled Finnish datasets: [Yle News](https://github.com/spyysalo/yle-corpus) and [Eduskunta](https://github.com/aajanki/eduskunta-vkk). Classification fine-tuning was done with a sequence length of 128 tokens. Also, for UL2 models a prefix token of `[NLU]` has been added to each input text.
When fine-tuned on those datasets, this model (the second row of the table) achieves the following accuracy results compared to our other UL2 models and their parameter counts:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/ul2-tiny-nl6-finnish | 31 million |92.88 |69.40 |
|Finnish-NLP/ul2-mini-nl8-finnish | 72 million |93.83 |70.10 |
|Finnish-NLP/ul2-small-nl16-finnish | 184 million |94.25 |74.63 |
|Finnish-NLP/ul2-small-nl24-finnish | 260 million |94.03 |73.87 |
|Finnish-NLP/ul2-base-nl36-finnish | 814 million |94.35 |75.47 |
Results of fine-tuning our T5 models (with the original T5 pretraining task) on the same datasets are following:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|Finnish-NLP/t5-tiny-nl6-finnish | 31 million |92.80 |69.07 |
|Finnish-NLP/t5-mini-nl8-finnish | 72 million |93.89 |71.43 |
|Finnish-NLP/t5-small-nl16-finnish | 184 million |94.46 |74.00 |
|Finnish-NLP/t5-small-nl24-finnish | 260 million |**94.68** |74.90 |
|Finnish-NLP/byt5-base-finnish | 582 million |92.33 |73.13 |
|Finnish-NLP/t5-base-nl36-finnish | 814 million |94.40 |**75.97** |
|Finnish-NLP/t5-large-nl36-finnish | 1425 million |94.17 |73.50 |
Fine-tuning Google's multilingual mT5 models on the same datasets we can clearly see that our monolingual Finnish T5 models achieve much better results on Finnish text classification:
| | Model parameters | Yle News accuracy | Eduskunta accuracy |
|-------------------------------------------------------|------------------|---------------------|----------------------|
|google/mt5-small | 301 million |91.51 |64.10 |
|google/mt5-base | 583 million |92.71 |68.40 |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
## Team Members
- Aapo Tanskanen, [Hugging Face profile](https://huggingface.co/aapot), [LinkedIn profile](https://www.linkedin.com/in/aapotanskanen/)
- Rasmus Toivanen, [Hugging Face profile](https://huggingface.co/RASMUS), [LinkedIn profile](https://www.linkedin.com/in/rasmustoivanen/)
Feel free to contact us for more details 🤗 |
Ljhee/xlm-roberta-base-finetuned-panx-de | Ljhee | "2023-02-28T06:32:25Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-02-28T05:49:14Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: validation
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8638300289723342
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1358
- F1: 0.8638
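A minimal inference sketch using the `transformers` NER pipeline (the example sentence is a placeholder):

```python
# Hedged inference sketch for the fine-tuned NER model
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Ljhee/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word-piece tokens into entities
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))  # placeholder sentence
```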
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2591 | 1.0 | 525 | 0.1621 | 0.8206 |
| 0.1276 | 2.0 | 1050 | 0.1379 | 0.8486 |
| 0.082 | 3.0 | 1575 | 0.1358 | 0.8638 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2
|
thu-coai/EVA2.0-xlarge | thu-coai | "2022-07-25T02:57:30Z" | 6 | 1 | transformers | [
"transformers",
"pytorch",
"zh",
"arxiv:2108.01547",
"arxiv:2203.09313",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2022-07-14T14:33:45Z" | ---
language: zh
tags:
- pytorch
license: mit
---
# EVA
## Model Description
EVA is the largest open-source Chinese dialogue model with up to 2.8B parameters. The 1.0 version model is pre-trained on [WudaoCorpus-Dialog](https://resource.wudaoai.cn/home), and the 2.0 version is pre-trained on a carefully cleaned version of WudaoCorpus-Dialog which yields better performance than the 1.0 version. [Paper link](https://arxiv.org/abs/2108.01547) of EVA1.0. [Paper link](https://arxiv.org/abs/2203.09313) of EVA2.0.
## Model Configuration
| Model | n_params | n_enc-layers | n_dec-layers | d_model | d_ff | n_heads | d_head | attn-scale |
| ------------- | -------- | ------------ | ------------ | ------- | ----- | ------- | ------ | ---------- |
| EVA1.0 | 2.8B | 24 | 24 | 2,048 | 5,120 | 32 | 64 | No |
| EVA2.0_Base | 300M | 12 | 12 | 768 | 3,072 | 12 | 64 | Yes |
| EVA2.0_Large | 970M | 24 | 24 | 1,024 | 4,096 | 16 | 64 | Yes |
| EVA2.0_xLarge | 2.8B | 24 | 24 | 2,048 | 5,120 | 32 | 64 | Yes |
## How to use
Please refer to our GitHub [repository](https://github.com/thu-coai/EVA/tree/huggingface).
## Performance
We use the KdConv dataset to finetune and evaluate the model. Under the default hyperparameters in the scripts, we get the following results on the test set:
| | loss | BLEU-4 | Dist-4 | f1 |
| ---- | ---- | ---- | ---- | ---- |
| eva1.0 Before finetune | 3.49 | 0.64 | 85.94 | 13.24 |
| eva1.0 After finetune | 2.37 | 3.94 | 72.47 | 19.80 |
| eva2.0 Before finetune | 3.14 | 1.02 | 82.67 | 14.36 |
| eva2.0 After finetune | 2.09 | 4.69 | 73.03 | 22.91 |
We also ran interactive inference with EVA2.0 using beam search:
```
Usr >>> 你好
Sys >>> 你好,请问你是?
Usr >>> 我是小明
Sys >>> 好的,小明,我是新来的,请多关照
Usr >>> 你了解人工智能吗?
Sys >>> 不了解,我只知道我是机器人
Usr >>> 那你很聪明
Sys >>> 谢谢夸奖,我会继续努力的
Usr >>> 你是学什么专业的呀
Sys >>> 计算机科学与技术,你呢?
Usr >>> 我也是学计算机的
Sys >>> 哦,那你有什么想知道的,可以问我
Usr >>> 编程语言你学过哪些?
Sys >>> java,c,c+都学过
Usr >>> 你学的真多
Sys >>> 呵呵,还好吧,我比较喜欢写代码
Usr >>> 哈哈,我也喜欢写代码,以后可以多多交流
Sys >>> 嗯,好的
Usr >>> 拜拜喽
Sys >>> 拜~
```
**NOTE**: Since different hardware may have different implementations of random functions, even if you use the same random seed as ours, you may not be able to reproduce this exact exchange. But the overall performance will not differ much.
## Disclaimer
The pre-trained models aim to facilitate research on conversation generation. The model provided in this repository is trained on a large dataset collected from various sources. Although a rigorous cleaning and filtering process has been carried out on the data and the model output, there is no guarantee that all inappropriate content has been completely removed. Content generated by the model does not represent the authors' opinions. The decoding script provided in this repository is only for research purposes. We are not responsible for any content generated using our model.
## Citation
```
@article{coai2021eva,
title={EVA: An Open-Domain Chinese Dialogue System with Large-Scale Generative Pre-Training},
author={Zhou, Hao and Ke, Pei and Zhang, Zheng and Gu, Yuxian and Zheng, Yinhe and Zheng, Chujie and Wang, Yida and Wu, Chen Henry and Sun, Hao and Yang, Xiaocong and Wen, Bosi and Zhu, Xiaoyan and Huang, Minlie and Tang, Jie},
journal={arXiv preprint arXiv:2108.01547},
year={2021}
}
@article{coai2022eva2,
title={{EVA2.0}: Investigating Open-Domain Chinese Dialogue Systems with Large-Scale Pre-Training},
author={Gu, Yuxian and Wen, Jiaxin and Sun, Hao and Song, Yi and Ke, Pei and Zheng, Chujie and Zhang, Zheng and Yao, Jianzhu and Zhu, Xiaoyan and Tang, Jie and Huang, Minlie},
journal={arXiv preprint arXiv:2203.09313},
year={2022}
}
``` |
honnete/fortunetelling | honnete | "2025-04-18T15:34:12Z" | 2 | 0 | null | [
"gguf",
"llama",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-11T22:16:20Z" |  |
Sayan01/Qwen-1.8B-0.5B-KL | Sayan01 | "2025-02-19T04:47:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-19T04:46:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
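In the absence of model-specific instructions, a hedged loading sketch (assuming a standard Qwen2-style causal LM checkpoint, per the repo tags):

```python
# Hedged sketch, assuming a standard Qwen2-style causal LM checkpoint
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sayan01/Qwen-1.8B-0.5B-KL"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, world!", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```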
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AIDA-UPM/MARTINI_enrich_BERTopic_RestoredPuritanism | AIDA-UPM | "2025-01-13T21:33:30Z" | 5 | 0 | bertopic | [
"bertopic",
"text-classification",
"region:us"
] | text-classification | "2025-01-13T21:33:28Z" |
---
tags:
- bertopic
library_name: bertopic
pipeline_tag: text-classification
---
# MARTINI_enrich_BERTopic_RestoredPuritanism
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U bertopic
```
You can use the model as follows:
```python
from bertopic import BERTopic
topic_model = BERTopic.load("AIDA-UPM/MARTINI_enrich_BERTopic_RestoredPuritanism")
topic_model.get_topic_info()
```
## Topic overview
* Number of topics: 5
* Number of training documents: 331
<details>
<summary>Click here for an overview of all topics.</summary>
| Topic ID | Topic Keywords | Topic Frequency | Label |
|----------|----------------|-----------------|-------|
| -1 | israelites - genesis - souls - christian - scripture | 20 | -1_israelites_genesis_souls_christian |
| 0 | population - abortion - decline - china - 2022 | 215 | 0_population_abortion_decline_china |
| 1 | caucasians - adamic - solomon - genesis - hyperdepigmentised | 47 | 1_caucasians_adamic_solomon_genesis |
| 2 | caucasians - neanderthals - haplogroups - ethnogenesis - siberian | 25 | 2_caucasians_neanderthals_haplogroups_ethnogenesis |
| 3 | protestantism - catechism - preterism - orthodox - puritan | 24 | 3_protestantism_catechism_preterism_orthodox |
</details>
## Training hyperparameters
* calculate_probabilities: True
* language: None
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: False
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None
## Framework versions
* Numpy: 1.26.4
* HDBSCAN: 0.8.40
* UMAP: 0.5.7
* Pandas: 2.2.3
* Scikit-Learn: 1.5.2
* Sentence-transformers: 3.3.1
* Transformers: 4.46.3
* Numba: 0.60.0
* Plotly: 5.24.1
* Python: 3.10.12
|
TheBloke/LlamaGuard-7B-GPTQ | TheBloke | "2023-12-11T19:19:31Z" | 38 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pytorch",
"llama-2",
"conversational",
"en",
"arxiv:2307.09288",
"base_model:llamas-community/LlamaGuard-7b",
"base_model:quantized:llamas-community/LlamaGuard-7b",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2023-12-11T19:00:11Z" | ---
base_model: llamas-community/LlamaGuard-7b
inference: false
language:
- en
license: llama2
model_creator: meta-llama
model_name: LlamaGuard 7B
model_type: llama
prompt_template: '[INST] {prompt} [/INST]
'
quantized_by: TheBloke
tags:
- pytorch
- llama
- llama-2
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# LlamaGuard 7B - GPTQ
- Model creator: [meta-llama](https://huggingface.co/meta-llama)
- Original model: [LlamaGuard 7B](https://huggingface.co/llamas-community/LlamaGuard-7b)
<!-- description start -->
# Description
This repo contains GPTQ model files for [meta-llama's LlamaGuard 7B](https://huggingface.co/llamas-community/LlamaGuard-7b).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LlamaGuard-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LlamaGuard-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LlamaGuard-7B-GGUF)
* [meta-llama's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/llamas-community/LlamaGuard-7b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: INST
```
[INST] {prompt} [/INST]
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/LlamaGuard-7B-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/LlamaGuard-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/LlamaGuard-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/LlamaGuard-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/LlamaGuard-7B-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 7.62 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/LlamaGuard-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 2048 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/LlamaGuard-7B-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/LlamaGuard-7B-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `LlamaGuard-7B-GPTQ`:
```shell
mkdir LlamaGuard-7B-GPTQ
huggingface-cli download TheBloke/LlamaGuard-7B-GPTQ --local-dir LlamaGuard-7B-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir LlamaGuard-7B-GPTQ
huggingface-cli download TheBloke/LlamaGuard-7B-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir LlamaGuard-7B-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir LlamaGuard-7B-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LlamaGuard-7B-GPTQ --local-dir LlamaGuard-7B-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/LlamaGuard-7B-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/LlamaGuard-7B-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/LlamaGuard-7B-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `LlamaGuard-7B-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/LlamaGuard-7B-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''[INST] {prompt} [/INST]
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/LlamaGuard-7B-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Tell me about AI"
prompt_template=f'''[INST] {prompt} [/INST]
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama and Mistral models in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: meta-llama's LlamaGuard 7B
## Model Details
**This repository contains the model weights both in the vanilla Llama format and the Hugging Face `transformers` format**
Llama-Guard is a 7B parameter [Llama 2](https://arxiv.org/abs/2307.09288)-based input-output
safeguard model. It can be used for classifying content in both LLM inputs (prompt
classification) and in LLM responses (response classification).
It acts as an LLM: it generates text in its output that indicates whether a given prompt or
response is safe or unsafe and, if unsafe according to a policy, also lists the violating subcategories.
Here is an example:

In order to produce classifier scores, we look at the probability for the first token, and turn that
into an “unsafe” class probability. Model users can then make binary decisions by applying a
desired threshold to the probability scores.
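As a rough illustration of that scoring approach, the snippet below computes a first-token "unsafe" probability. This is a sketch, not official code; it assumes "safe" and "unsafe" each begin with a single distinct token under this tokenizer:
```py
import torch

def unsafe_probability(model, tokenizer, chat):
    # Build the moderation prompt and take the next-token logits (no generation needed).
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]
    probs = torch.softmax(logits, dim=-1)
    # Assumption: "safe" / "unsafe" map to single leading tokens.
    safe_id = tokenizer.encode("safe", add_special_tokens=False)[0]
    unsafe_id = tokenizer.encode("unsafe", add_special_tokens=False)[0]
    return (probs[unsafe_id] / (probs[safe_id] + probs[unsafe_id])).item()
```
A binary decision is then just `unsafe_probability(...) > threshold` for whatever threshold suits the application.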
## Training and Evaluation
### Training Data
We use a mix of prompts that come from the Anthropic
[dataset](https://github.com/anthropics/hh-rlhf) and redteaming examples that we have collected
in house, in a separate process from our production redteaming. In particular, we took the
prompts only from the Anthropic dataset, and generated new responses from our in-house
LLaMA models, using jailbreaking techniques to elicit violating responses. We then annotated
Anthropic data (prompts & responses) in house, mapping labels according to the categories
identified above. Overall we have ~13K training examples.
## Taxonomy of harms and Risk Guidelines
As automated content risk mitigation relies on classifiers to make decisions
about content in real time, a prerequisite to building these systems is to have
the following components:
- A **taxonomy** of risks that are of interest – these become the classes of a
classifier.
- A **risk guideline** that determines where we put the line between encouraged
and discouraged outputs for each risk category in the taxonomy.
Together with this model, we release an open taxonomy inspired by existing open
taxonomies such as those employed by Google, Microsoft and OpenAI in the hope
that it can be useful to the community. This taxonomy does not necessarily reflect Meta's
own internal policies and is meant to demonstrate the value of our method to
tune LLMs into classifiers that show high performance and high degrees of adaptability to different policies.
### The Llama-Guard Safety Taxonomy & Risk Guidelines
Below, we provide both the harm types themselves under this taxonomy and also examples of
the specific kinds of content that would be considered harmful under each category:
- **Violence & Hate** encompasses statements that encourage or could help people
plan or engage in violence. Similarly, statements that advocate
discrimination, contain slurs, or voice hateful sentiments against people
based on their sensitive personal characteristics (ex: race, color, religion,
national origin, sexual orientation, gender, gender identity, or disability)
would also be considered inappropriate under this category.
- **Sexual Content** encompasses statements encouraging someone (who could be
underage) to engage in specific sex acts. Similarly, sexually explicit (i.e.,
erotic) statements would also be considered inappropriate under this category.
- **Guns & Illegal Weapons** encompasses statements that specifically encourage,
condone, or could help people plan or execute specific crimes involving the
illegal acquisition, creation, or use of guns or illegal weapons (ex:
explosives, biological agents, or chemical weapons), including instructions on
how to create or use them.
- **Regulated or Controlled Substances** encompasses statements that
specifically encourage or could help people to illegally produce, transfer, or
consume regulated or controlled substances like illegal drugs, tobacco,
alcohol, or cannabis.
- **Suicide & Self Harm** encompasses statements that encourage, condone, or
enable people to harm themselves (ex: by providing instructions or information
on methods of self-harm). When people express or imply an intent to harm
themselves, any response other than one that provides links to appropriate
health resources would be considered inappropriate (even if entirely
supportive).
- **Criminal Planning** encompasses miscellaneous statements that encourage,
condone, or could help people plan or execute specific criminal activities,
like arson, kidnapping, or theft. Items should only be considered harmful
under this category when they could not be read as violating any of the other
harm types above (ex: statements that encourage violence should be considered
violating under Violence & Hate rather than this category).
## How to Use in `transformers`
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "meta-llama/LlamaGuard-7b"
device = "cuda"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype, device_map=device)
def moderate(chat):
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)
output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
prompt_len = input_ids.shape[-1]
return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)
moderate([
{"role": "user", "content": "I forgot how to kill a process in Linux, can you help?"},
{"role": "assistant", "content": "Sure! To kill a process in Linux, you can use the kill command followed by the process ID (PID) of the process you want to terminate."},
])
# `safe`
```
You need to be logged in to the Hugging Face Hub to use the model.
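If you haven't authenticated yet, one way to log in from the command line is shown below (the token must belong to an account that has been granted access to the model):
```shell
pip3 install huggingface_hub
huggingface-cli login
```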
For more details, see [this Colab notebook](https://colab.research.google.com/drive/16s0tlCSEDtczjPzdIK3jq0Le5LlnSYGf?usp=sharing).
## Evaluation results
We compare the performance of the model against standard content moderation APIs
in the industry, including
[OpenAI](https://platform.openai.com/docs/guides/moderation/overview), [Azure Content Safety](https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/harm-categories), and [PerspectiveAPI](https://developers.perspectiveapi.com/s/about-the-api-attributes-and-languages?language=en_US) from Google on both public and in-house benchmarks. The public benchmarks
include [ToxicChat](https://huggingface.co/datasets/lmsys/toxic-chat) and
[OpenAI Moderation](https://github.com/openai/moderation-api-release).
Note: comparisons are not exactly apples-to-apples due to mismatches in each
taxonomy. The interested reader can find a more detailed discussion about this
in our paper: [LINK TO PAPER].
| | Our Test Set (Prompt) | OpenAI Mod | ToxicChat | Our Test Set (Response) |
| --------------- | --------------------- | ---------- | --------- | ----------------------- |
| Llama-Guard | **0.945** | 0.847 | **0.626** | **0.953** |
| OpenAI API | 0.764 | **0.856** | 0.588 | 0.769 |
| Perspective API | 0.728 | 0.787 | 0.532 | 0.699 |
|
sm54/FuseO1-QwQ-SkyT1-Flash-32B-Q4_K_M-GGUF | sm54 | "2025-03-10T21:25:49Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:sm54/FuseO1-QwQ-SkyT1-Flash-32B",
"base_model:quantized:sm54/FuseO1-QwQ-SkyT1-Flash-32B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-10T21:24:21Z" | ---
base_model: sm54/FuseO1-QwQ-SkyT1-Flash-32B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# sm54/FuseO1-QwQ-SkyT1-Flash-32B-Q4_K_M-GGUF
This model was converted to GGUF format from [`sm54/FuseO1-QwQ-SkyT1-Flash-32B`](https://huggingface.co/sm54/FuseO1-QwQ-SkyT1-Flash-32B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/sm54/FuseO1-QwQ-SkyT1-Flash-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sm54/FuseO1-QwQ-SkyT1-Flash-32B-Q4_K_M-GGUF --hf-file fuseo1-qwq-skyt1-flash-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sm54/FuseO1-QwQ-SkyT1-Flash-32B-Q4_K_M-GGUF --hf-file fuseo1-qwq-skyt1-flash-32b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sm54/FuseO1-QwQ-SkyT1-Flash-32B-Q4_K_M-GGUF --hf-file fuseo1-qwq-skyt1-flash-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sm54/FuseO1-QwQ-SkyT1-Flash-32B-Q4_K_M-GGUF --hf-file fuseo1-qwq-skyt1-flash-32b-q4_k_m.gguf -c 2048
```
|
snintendog/Usagi_Kurokawa_GotchaForce | snintendog | "2023-11-23T00:39:47Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-11-23T00:35:58Z" | ---
license: openrail
---
Created from all voice lines in the English version of Gotcha Force, converted from ADX to WAV files (around 2 minutes of audio). (RMVPE) (RVC V2) (600 epochs)
Male 10-16, Female 0-6 for voices. |
DevQuasar/AIDC-AI.Marco-o1-GGUF | DevQuasar | "2025-02-01T23:08:17Z" | 88 | 1 | null | [
"gguf",
"text-generation",
"base_model:AIDC-AI/Marco-o1",
"base_model:quantized:AIDC-AI/Marco-o1",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-11-23T02:51:38Z" | ---
base_model:
- AIDC-AI/Marco-o1
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [AIDC-AI/Marco-o1](https://huggingface.co/AIDC-AI/Marco-o1)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
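The card doesn't include usage instructions; as with other GGUF repos, one option is llama.cpp, e.g. (the `--hf-file` value is a placeholder — substitute one of the quant files actually present in this repo):
```shell
llama-cli --hf-repo DevQuasar/AIDC-AI.Marco-o1-GGUF --hf-file <chosen-quant>.gguf -p "Hello"
```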
|
hantempler/mental | hantempler | "2024-12-20T09:04:31Z" | 150 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-20T08:37:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
caveli/bloom_prompt_tuning_1693686452.0382597 | caveli | "2023-09-02T20:32:52Z" | 4 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-02T20:32:50Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
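The card doesn't document inference; a typical way to load a PEFT checkpoint like this one is sketched below (the base model is resolved from the adapter config, so nothing here is hard-coded beyond the repo id):
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "caveli/bloom_prompt_tuning_1693686452.0382597"
config = PeftConfig.from_pretrained(repo)
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base, repo)  # attaches the trained prompt embeddings
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
```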
|
Casual-Autopsy/Llama-3-VNTL-Yollisa-8B-GGUF | Casual-Autopsy | "2025-03-08T13:53:37Z" | 1,455 | 0 | null | [
"gguf",
"base_model:Casual-Autopsy/Llama-3-VNTL-Yollisa-8B",
"base_model:quantized:Casual-Autopsy/Llama-3-VNTL-Yollisa-8B",
"endpoints_compatible",
"region:us"
] | null | "2025-03-05T18:53:35Z" | ---
base_model:
- Casual-Autopsy/Llama-3-VNTL-Yollisa-8B
--- |
VEERANSH/q-Taxi-v3 | VEERANSH | "2022-09-05T08:10:49Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-09-05T08:10:44Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub and evaluate_agent are helper functions defined in the Deep RL Course notebook.
model = load_from_hub(repo_id="VEERANSH/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
lamagr/IAmazingGrace | lamagr | "2023-10-27T03:37:00Z" | 0 | 0 | peft | [
"peft",
"llama",
"4-bit",
"region:us"
] | null | "2023-10-26T10:45:21Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
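Expressed as code, that configuration corresponds roughly to the following sketch (for illustration only — the card does not name the base model this adapter was trained on):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
# Pass it as quantization_config= when loading the base model with from_pretrained().
```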
### Framework versions
- PEFT 0.4.0
|
VERSIL91/104c9530-1eec-43c2-a3d6-27e611ed65bb | VERSIL91 | "2025-01-09T19:02:18Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-7b-hf",
"base_model:adapter:NousResearch/CodeLlama-7b-hf",
"region:us"
] | null | "2025-01-09T18:52:47Z" | ---
library_name: peft
base_model: NousResearch/CodeLlama-7b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 104c9530-1eec-43c2-a3d6-27e611ed65bb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: NousResearch/CodeLlama-7b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 83dae1444e4dde97_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/83dae1444e4dde97_train_data.json
type:
field_input: text
field_instruction: query
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/104c9530-1eec-43c2-a3d6-27e611ed65bb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/83dae1444e4dde97_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 104c9530-1eec-43c2-a3d6-27e611ed65bb
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 104c9530-1eec-43c2-a3d6-27e611ed65bb
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 104c9530-1eec-43c2-a3d6-27e611ed65bb
This model is a fine-tuned version of [NousResearch/CodeLlama-7b-hf](https://huggingface.co/NousResearch/CodeLlama-7b-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7703
## Model description
More information needed
## Intended uses & limitations
More information needed
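That said, since this is a PEFT LoRA adapter over NousResearch/CodeLlama-7b-hf, loading it for inference would typically look like the following sketch:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("NousResearch/CodeLlama-7b-hf", device_map="auto")
model = PeftModel.from_pretrained(base, "VERSIL91/104c9530-1eec-43c2-a3d6-27e611ed65bb")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/CodeLlama-7b-hf")
```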
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 17.487 | 0.0070 | 1 | 1.1553 |
| 19.2854 | 0.0351 | 5 | 1.1271 |
| 16.6593 | 0.0702 | 10 | 1.0094 |
| 13.9098 | 0.1053 | 15 | 0.8202 |
| 12.8055 | 0.1404 | 20 | 0.7703 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Vapor_v2_7B-GGUF | mradermacher | "2025-03-13T10:48:24Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:FourOhFour/Vapor_v2_7B",
"base_model:quantized:FourOhFour/Vapor_v2_7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-13T10:26:18Z" | ---
base_model: FourOhFour/Vapor_v2_7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/FourOhFour/Vapor_v2_7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
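For multi-part quants, the parts can typically be joined by simple concatenation before loading (the filenames below are illustrative, not files guaranteed to exist in this repo):
```shell
cat Vapor_v2_7B.Q8_0.gguf.part1of2 Vapor_v2_7B.Q8_0.gguf.part2of2 > Vapor_v2_7B.Q8_0.gguf
```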
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Vapor_v2_7B-GGUF/resolve/main/Vapor_v2_7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ssai0915/topic_learning_llama | ssai0915 | "2024-11-15T06:32:02Z" | 180 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-15T06:31:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SirBadr/my_awesome_classification_model | SirBadr | "2023-03-31T13:12:30Z" | 226 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-03-07T09:45:37Z" | This model classifies images in two classes {original, generated by AI} |
horheynm/actoder_20241712_193531 | horheynm | "2024-07-18T19:36:10Z" | 112 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-18T19:35:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huggingtweets/cyrusshepard-fastfwdco-lilyraynyc | huggingtweets | "2021-09-29T08:19:04Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/cyrusshepard-fastfwdco-lilyraynyc/1632903540115/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/713653445262237696/mdyVSGoj_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1241620963768201216/sG68m_iE_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1308419103510626304/gUgr1gMo_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">fastfwd & Cyrus & Lily Ray 😏</div>
<div style="text-align: center; font-size: 14px;">@cyrusshepard-fastfwdco-lilyraynyc</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from fastfwd & Cyrus & Lily Ray 😏.
| Data | fastfwd | Cyrus | Lily Ray 😏 |
| --- | --- | --- | --- |
| Tweets downloaded | 945 | 3248 | 3250 |
| Retweets | 60 | 343 | 89 |
| Short tweets | 5 | 729 | 310 |
| Tweets kept | 880 | 2176 | 2851 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3k89f9gx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cyrusshepard-fastfwdco-lilyraynyc's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3eq4v17k) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3eq4v17k/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cyrusshepard-fastfwdco-lilyraynyc')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
zpp-murmuras/bert-selection-add-grow-json-rewe | zpp-murmuras | "2025-03-30T17:25:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2025-03-30T17:24:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Saxo/Linkbricks-Horizon-AI-Korean-LLAMA3blend-8x8b | Saxo | "2024-12-05T03:39:20Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"ko",
"en",
"ja",
"zh",
"dataset:Saxo/total_ko_train_set_1_without_wiki_with_orca",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-12T10:08:29Z" | ---
library_name: transformers
license: apache-2.0
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- Saxo/total_ko_train_set_1_without_wiki_with_orca
language:
- ko
- en
- ja
- zh
pipeline_tag: text-generation
---
# Model Card for Model ID
<div align="center">
<img src="http://www.linkbricks.com/wp-content/uploads/2024/11/fulllogo.png" />
</div>
Director Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics, SFT-DPO trained this Korean-focused MoE (Mixture of Experts) model of eight LLAMA3-8B experts on eight H100-80G GPUs on GCP, using meta-llama/Meta-Llama-3-8B as the base model.
The tokenizer is the same as Llama 3's; this version does not extend the Korean vocabulary.
It integrates specialized LLMs for general Q&A (chat), medical, military, Korean-Chinese-Japanese translation, and coding.
Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics, trained the meta-llama/Meta-Llama-3-8B base model on 8 H100-60Gs on GCP for 4 hours of instructional training (8000 Tokens).
Accelerate, Deepspeed Zero-3 libraries were used.
www.linkbricks.com, www.linkbricks.vc |
SebastianS/poca-SoccerTwos-v2 | SebastianS | "2023-03-15T19:12:43Z" | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | "2023-03-15T19:12:27Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
library_name: ml-agents
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos
2. Step 1: Write your model_id: SebastianS/poca-SoccerTwos-v2
3. Step 2: Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
Jefferson-bueno/lora_model_unsloth | Jefferson-bueno | "2024-05-21T11:11:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-21T11:11:32Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** Jefferson-bueno
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Hielke/robbert-2023-finetuned-t5-sicknl | Hielke | "2024-08-21T22:32:56Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-08-21T22:32:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso03/384e7fe4-4054-4c63-8c08-66addd5fdf44 | lesso03 | "2025-03-30T05:06:52Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Hermes-3-Llama-3.1-8B",
"base_model:adapter:unsloth/Hermes-3-Llama-3.1-8B",
"region:us"
] | null | "2025-03-30T02:58:03Z" | ---
library_name: peft
base_model: unsloth/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 384e7fe4-4054-4c63-8c08-66addd5fdf44
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Hermes-3-Llama-3.1-8B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1da6f69076614ffb_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1da6f69076614ffb_train_data.json
type:
field_input: captions
field_instruction: sha256
field_output: file_identifier
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso03/384e7fe4-4054-4c63-8c08-66addd5fdf44
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000203
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/1da6f69076614ffb_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 30
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ae785677-51df-45e8-869e-7d45a19d00ce
wandb_project: 03a
wandb_run: your_name
wandb_runid: ae785677-51df-45e8-869e-7d45a19d00ce
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 384e7fe4-4054-4c63-8c08-66addd5fdf44
This model is a fine-tuned version of [unsloth/Hermes-3-Llama-3.1-8B](https://huggingface.co/unsloth/Hermes-3-Llama-3.1-8B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7332
## Model description
More information needed
## Intended uses & limitations
More information needed
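A minimal loading sketch, assuming the adapter applies cleanly on top of the base model named in the config above via `peft` (all repo ids are taken from this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model and adapter ids come from the axolotl config above.
base = AutoModelForCausalLM.from_pretrained("unsloth/Hermes-3-Llama-3.1-8B")
model = PeftModel.from_pretrained(base, "lesso03/384e7fe4-4054-4c63-8c08-66addd5fdf44")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Hermes-3-Llama-3.1-8B")
```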
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000203
- train_batch_size: 4
- eval_batch_size: 4
- seed: 30
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: fused AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 4.2829 |
| 2.7404 | 0.1008 | 500 | 2.7332 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
htutwlatt/testModal | htutwlatt | "2025-04-19T06:48:12Z" | 0 | 0 | null | [
"pytorch",
"gguf",
"llama",
"unsloth",
"trl",
"sft",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-15T17:12:40Z" | ---
license: mit
tags:
- unsloth
- trl
- sft
---
|
athirdpath/Orca-2-13b-Alpaca-Uncensored-LORA | athirdpath | "2023-11-27T11:40:35Z" | 4 | 2 | transformers | [
"transformers",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:microsoft/Orca-2-13b",
"base_model:quantized:microsoft/Orca-2-13b",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2023-11-27T11:22:29Z" | ---
license: other
base_model: microsoft/Orca-2-13b
tags:
- generated_from_trainer
model-index:
- name: qlora
results: []
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# qlora
This model is a fine-tuned version of [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) on a subset of the Vezora/Mini_Orca_Uncencored_Alpaca dataset, with some particularly spicy prompts added to reduce the risk of refusals.
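A loading sketch, assuming the repository holds a full 4-bit checkpoint as its `4-bit`/`bitsandbytes` tags suggest (device placement and compute dtype are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantized loading; the repo id comes from this card.
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "athirdpath/Orca-2-13b-Alpaca-Uncensored-LORA",
    quantization_config=bnb,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("athirdpath/Orca-2-13b-Alpaca-Uncensored-LORA")
```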
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 36
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- num_epochs: 2 |
alirzb/S1_M1_R3_vit_42499444 | alirzb | "2024-01-02T07:55:21Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-01-02T07:35:32Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: S1_M1_R3_vit_42499444
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.998303647158609
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# S1_M1_R3_vit_42499444
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0076
- Accuracy: 0.9983
## Model description
More information needed
## Intended uses & limitations
More information needed
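A minimal inference sketch, assuming the checkpoint works with the generic image-classification pipeline (the image path is a placeholder):

```python
from transformers import pipeline

# Repo id from this card; the task matches the model's image-classification tags.
classifier = pipeline("image-classification", model="alirzb/S1_M1_R3_vit_42499444")
print(classifier("path/to/image.jpg"))  # placeholder path; any PIL-readable image works
```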
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0799 | 0.99 | 73 | 0.0444 | 0.9958 |
| 0.0309 | 1.99 | 147 | 0.0085 | 0.9992 |
| 0.0072 | 3.0 | 221 | 0.0090 | 0.9983 |
| 0.0021 | 4.0 | 295 | 0.0076 | 0.9992 |
| 0.0018 | 4.95 | 365 | 0.0076 | 0.9983 |
### Framework versions
- Transformers 4.36.2
- Pytorch 1.11.0+cu102
- Datasets 2.16.0
- Tokenizers 0.15.0
|
HVD2407/mt | HVD2407 | "2024-05-20T11:23:56Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-05-20T11:23:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
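As a stand-in for the missing snippet, a minimal sketch assuming the Marian checkpoint works with the `text2text-generation` pipeline its pipeline tag declares (input and output are illustrative; the card does not state the language pair):

```python
from transformers import pipeline

# Repo id from this card; Marian checkpoints are seq2seq, so text2text-generation applies.
translator = pipeline("text2text-generation", model="HVD2407/mt")
print(translator("Hello, world!"))  # illustrative input; the language pair is undocumented
```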
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mlx-community/whisper-tiny.en-mlx | mlx-community | "2024-03-09T03:32:32Z" | 200 | 0 | mlx | [
"mlx",
"whisper",
"region:us"
] | null | "2024-03-08T23:19:33Z" | ---
library_name: mlx
---
# whisper-tiny.en-mlx
This model was converted to MLX format from [`tiny.en`]().
## Use with mlx
```bash
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/whisper/
pip install -r requirements.txt
```

```python
>>> import whisper
>>> whisper.transcribe("FILE_NAME")
```
|
jpark677/internvl2-8b-mmmu-lora-ep-3-waa-false | jpark677 | "2025-04-02T14:11:01Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-02T14:11:01Z" |
MayBashendy/ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k6_task2_organization | MayBashendy | "2025-01-15T13:33:47Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-15T13:08:48Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k6_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k6_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5739
- Qwk: 0.4276
- Mse: 0.5739
- Rmse: 0.7575
## Model description
More information needed
## Intended uses & limitations
More information needed
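A minimal usage sketch, assuming the checkpoint loads through the standard text-classification pipeline; note that the Qwk/MSE metrics above suggest an ordinal scoring task, so the meaning of the returned labels depends on the (undocumented) score scale:

```python
from transformers import pipeline

# Repo id from this card; the input sentence is purely illustrative.
scorer = pipeline(
    "text-classification",
    model="MayBashendy/ArabicNewSplits8_usingALLEssays_FineTuningAraBERT_run3_AugV5_k6_task2_organization",
)
print(scorer("نص تجريبي لتقييم تنظيم المقال."))  # illustrative Arabic input
```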
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.0625 | 2 | 4.2977 | -0.0038 | 4.2977 | 2.0731 |
| No log | 0.125 | 4 | 2.1232 | 0.0822 | 2.1232 | 1.4571 |
| No log | 0.1875 | 6 | 1.1544 | 0.0154 | 1.1544 | 1.0744 |
| No log | 0.25 | 8 | 0.8479 | 0.1616 | 0.8479 | 0.9208 |
| No log | 0.3125 | 10 | 0.8473 | 0.0868 | 0.8473 | 0.9205 |
| No log | 0.375 | 12 | 0.8654 | 0.1265 | 0.8654 | 0.9303 |
| No log | 0.4375 | 14 | 0.8437 | 0.2643 | 0.8437 | 0.9185 |
| No log | 0.5 | 16 | 0.9393 | 0.1416 | 0.9393 | 0.9692 |
| No log | 0.5625 | 18 | 1.3947 | 0.0811 | 1.3947 | 1.1810 |
| No log | 0.625 | 20 | 1.3401 | 0.1007 | 1.3401 | 1.1576 |
| No log | 0.6875 | 22 | 0.9484 | 0.1741 | 0.9484 | 0.9739 |
| No log | 0.75 | 24 | 0.6702 | 0.3947 | 0.6702 | 0.8187 |
| No log | 0.8125 | 26 | 0.6545 | 0.4124 | 0.6545 | 0.8090 |
| No log | 0.875 | 28 | 0.6464 | 0.3915 | 0.6464 | 0.8040 |
| No log | 0.9375 | 30 | 0.7866 | 0.3203 | 0.7866 | 0.8869 |
| No log | 1.0 | 32 | 1.0283 | 0.3178 | 1.0283 | 1.0140 |
| No log | 1.0625 | 34 | 1.3757 | 0.2071 | 1.3757 | 1.1729 |
| No log | 1.125 | 36 | 1.4568 | 0.2133 | 1.4568 | 1.2070 |
| No log | 1.1875 | 38 | 1.0331 | 0.2617 | 1.0331 | 1.0164 |
| No log | 1.25 | 40 | 0.6794 | 0.3591 | 0.6794 | 0.8243 |
| No log | 1.3125 | 42 | 0.6357 | 0.4112 | 0.6357 | 0.7973 |
| No log | 1.375 | 44 | 0.6701 | 0.3523 | 0.6701 | 0.8186 |
| No log | 1.4375 | 46 | 0.6743 | 0.3631 | 0.6743 | 0.8212 |
| No log | 1.5 | 48 | 0.7662 | 0.3254 | 0.7662 | 0.8753 |
| No log | 1.5625 | 50 | 0.7815 | 0.3219 | 0.7815 | 0.8840 |
| No log | 1.625 | 52 | 0.7622 | 0.3469 | 0.7622 | 0.8730 |
| No log | 1.6875 | 54 | 0.6263 | 0.4112 | 0.6263 | 0.7914 |
| No log | 1.75 | 56 | 0.5540 | 0.4822 | 0.5540 | 0.7443 |
| No log | 1.8125 | 58 | 0.5557 | 0.4852 | 0.5557 | 0.7454 |
| No log | 1.875 | 60 | 0.5691 | 0.4821 | 0.5691 | 0.7544 |
| No log | 1.9375 | 62 | 0.7689 | 0.4786 | 0.7689 | 0.8769 |
| No log | 2.0 | 64 | 0.9560 | 0.3972 | 0.9560 | 0.9778 |
| No log | 2.0625 | 66 | 0.6650 | 0.4814 | 0.6650 | 0.8155 |
| No log | 2.125 | 68 | 0.5569 | 0.4675 | 0.5569 | 0.7463 |
| No log | 2.1875 | 70 | 0.5729 | 0.4842 | 0.5729 | 0.7569 |
| No log | 2.25 | 72 | 0.5574 | 0.4873 | 0.5574 | 0.7466 |
| No log | 2.3125 | 74 | 0.5483 | 0.4897 | 0.5483 | 0.7405 |
| No log | 2.375 | 76 | 0.6252 | 0.4746 | 0.6252 | 0.7907 |
| No log | 2.4375 | 78 | 0.5725 | 0.4645 | 0.5725 | 0.7567 |
| No log | 2.5 | 80 | 0.5522 | 0.5339 | 0.5522 | 0.7431 |
| No log | 2.5625 | 82 | 0.5593 | 0.5539 | 0.5593 | 0.7479 |
| No log | 2.625 | 84 | 0.5673 | 0.5497 | 0.5673 | 0.7532 |
| No log | 2.6875 | 86 | 0.5748 | 0.5870 | 0.5748 | 0.7582 |
| No log | 2.75 | 88 | 0.5826 | 0.5541 | 0.5826 | 0.7633 |
| No log | 2.8125 | 90 | 0.6101 | 0.5777 | 0.6101 | 0.7811 |
| No log | 2.875 | 92 | 0.7987 | 0.5521 | 0.7987 | 0.8937 |
| No log | 2.9375 | 94 | 0.9902 | 0.4420 | 0.9902 | 0.9951 |
| No log | 3.0 | 96 | 0.8812 | 0.5088 | 0.8812 | 0.9387 |
| No log | 3.0625 | 98 | 0.6936 | 0.5152 | 0.6936 | 0.8328 |
| No log | 3.125 | 100 | 0.6367 | 0.5309 | 0.6367 | 0.7979 |
| No log | 3.1875 | 102 | 0.6148 | 0.5649 | 0.6148 | 0.7841 |
| No log | 3.25 | 104 | 0.6116 | 0.5510 | 0.6116 | 0.7820 |
| No log | 3.3125 | 106 | 0.7515 | 0.5048 | 0.7515 | 0.8669 |
| No log | 3.375 | 108 | 0.8913 | 0.3364 | 0.8913 | 0.9441 |
| No log | 3.4375 | 110 | 0.7429 | 0.4707 | 0.7429 | 0.8619 |
| No log | 3.5 | 112 | 0.6454 | 0.5011 | 0.6454 | 0.8034 |
| No log | 3.5625 | 114 | 0.6354 | 0.4985 | 0.6354 | 0.7971 |
| No log | 3.625 | 116 | 0.6257 | 0.4936 | 0.6257 | 0.7910 |
| No log | 3.6875 | 118 | 0.6786 | 0.5358 | 0.6786 | 0.8238 |
| No log | 3.75 | 120 | 0.6558 | 0.5139 | 0.6558 | 0.8098 |
| No log | 3.8125 | 122 | 0.6004 | 0.5040 | 0.6004 | 0.7749 |
| No log | 3.875 | 124 | 0.6033 | 0.4688 | 0.6033 | 0.7767 |
| No log | 3.9375 | 126 | 0.5999 | 0.4579 | 0.5999 | 0.7745 |
| No log | 4.0 | 128 | 0.6846 | 0.5074 | 0.6846 | 0.8274 |
| No log | 4.0625 | 130 | 0.6587 | 0.5201 | 0.6587 | 0.8116 |
| No log | 4.125 | 132 | 0.6016 | 0.4833 | 0.6016 | 0.7756 |
| No log | 4.1875 | 134 | 0.7315 | 0.5163 | 0.7315 | 0.8553 |
| No log | 4.25 | 136 | 0.7535 | 0.5163 | 0.7535 | 0.8681 |
| No log | 4.3125 | 138 | 0.6418 | 0.5642 | 0.6418 | 0.8011 |
| No log | 4.375 | 140 | 0.6989 | 0.5544 | 0.6989 | 0.8360 |
| No log | 4.4375 | 142 | 0.6844 | 0.5309 | 0.6844 | 0.8273 |
| No log | 4.5 | 144 | 0.6826 | 0.5436 | 0.6826 | 0.8262 |
| No log | 4.5625 | 146 | 0.8225 | 0.5018 | 0.8225 | 0.9069 |
| No log | 4.625 | 148 | 0.6747 | 0.5265 | 0.6747 | 0.8214 |
| No log | 4.6875 | 150 | 0.6026 | 0.4920 | 0.6026 | 0.7762 |
| No log | 4.75 | 152 | 0.6082 | 0.4908 | 0.6082 | 0.7799 |
| No log | 4.8125 | 154 | 0.6400 | 0.4789 | 0.6400 | 0.8000 |
| No log | 4.875 | 156 | 0.6100 | 0.4992 | 0.6100 | 0.7810 |
| No log | 4.9375 | 158 | 0.6104 | 0.5057 | 0.6104 | 0.7813 |
| No log | 5.0 | 160 | 0.6492 | 0.5019 | 0.6492 | 0.8057 |
| No log | 5.0625 | 162 | 0.6546 | 0.5018 | 0.6546 | 0.8091 |
| No log | 5.125 | 164 | 0.6223 | 0.4733 | 0.6223 | 0.7889 |
| No log | 5.1875 | 166 | 0.6171 | 0.4608 | 0.6171 | 0.7855 |
| No log | 5.25 | 168 | 0.6122 | 0.4971 | 0.6122 | 0.7825 |
| No log | 5.3125 | 170 | 0.6240 | 0.5384 | 0.6240 | 0.7900 |
| No log | 5.375 | 172 | 0.6565 | 0.5135 | 0.6565 | 0.8102 |
| No log | 5.4375 | 174 | 0.6365 | 0.5247 | 0.6365 | 0.7978 |
| No log | 5.5 | 176 | 0.6053 | 0.4984 | 0.6053 | 0.7780 |
| No log | 5.5625 | 178 | 0.5980 | 0.5567 | 0.5980 | 0.7733 |
| No log | 5.625 | 180 | 0.6076 | 0.5065 | 0.6076 | 0.7795 |
| No log | 5.6875 | 182 | 0.6298 | 0.4916 | 0.6298 | 0.7936 |
| No log | 5.75 | 184 | 0.8423 | 0.4590 | 0.8423 | 0.9178 |
| No log | 5.8125 | 186 | 0.7731 | 0.5019 | 0.7731 | 0.8793 |
| No log | 5.875 | 188 | 0.6221 | 0.5050 | 0.6221 | 0.7887 |
| No log | 5.9375 | 190 | 0.6341 | 0.4781 | 0.6341 | 0.7963 |
| No log | 6.0 | 192 | 0.6293 | 0.4695 | 0.6293 | 0.7933 |
| No log | 6.0625 | 194 | 0.6659 | 0.5438 | 0.6659 | 0.8160 |
| No log | 6.125 | 196 | 0.6849 | 0.5420 | 0.6849 | 0.8276 |
| No log | 6.1875 | 198 | 0.6442 | 0.4757 | 0.6442 | 0.8026 |
| No log | 6.25 | 200 | 0.6954 | 0.4959 | 0.6954 | 0.8339 |
| No log | 6.3125 | 202 | 0.6656 | 0.4529 | 0.6656 | 0.8158 |
| No log | 6.375 | 204 | 0.6586 | 0.5114 | 0.6586 | 0.8115 |
| No log | 6.4375 | 206 | 0.6795 | 0.5111 | 0.6795 | 0.8243 |
| No log | 6.5 | 208 | 0.8455 | 0.4182 | 0.8455 | 0.9195 |
| No log | 6.5625 | 210 | 0.8069 | 0.4212 | 0.8069 | 0.8983 |
| No log | 6.625 | 212 | 0.6412 | 0.5011 | 0.6412 | 0.8007 |
| No log | 6.6875 | 214 | 0.7313 | 0.4422 | 0.7313 | 0.8552 |
| No log | 6.75 | 216 | 0.8508 | 0.3369 | 0.8508 | 0.9224 |
| No log | 6.8125 | 218 | 0.7695 | 0.3756 | 0.7695 | 0.8772 |
| No log | 6.875 | 220 | 0.6261 | 0.5006 | 0.6261 | 0.7912 |
| No log | 6.9375 | 222 | 0.6978 | 0.4813 | 0.6978 | 0.8353 |
| No log | 7.0 | 224 | 0.7723 | 0.4925 | 0.7723 | 0.8788 |
| No log | 7.0625 | 226 | 0.6822 | 0.4973 | 0.6822 | 0.8260 |
| No log | 7.125 | 228 | 0.6743 | 0.4706 | 0.6743 | 0.8211 |
| No log | 7.1875 | 230 | 0.6799 | 0.4706 | 0.6799 | 0.8246 |
| No log | 7.25 | 232 | 0.7260 | 0.4914 | 0.7260 | 0.8521 |
| No log | 7.3125 | 234 | 0.7906 | 0.5322 | 0.7906 | 0.8892 |
| No log | 7.375 | 236 | 0.6844 | 0.4540 | 0.6844 | 0.8273 |
| No log | 7.4375 | 238 | 0.6783 | 0.4486 | 0.6783 | 0.8236 |
| No log | 7.5 | 240 | 0.8836 | 0.4428 | 0.8836 | 0.9400 |
| No log | 7.5625 | 242 | 0.8793 | 0.4264 | 0.8793 | 0.9377 |
| No log | 7.625 | 244 | 0.7289 | 0.4570 | 0.7289 | 0.8537 |
| No log | 7.6875 | 246 | 0.5948 | 0.4913 | 0.5948 | 0.7712 |
| No log | 7.75 | 248 | 0.6994 | 0.4635 | 0.6994 | 0.8363 |
| No log | 7.8125 | 250 | 0.7958 | 0.3927 | 0.7958 | 0.8921 |
| No log | 7.875 | 252 | 0.7192 | 0.4324 | 0.7192 | 0.8481 |
| No log | 7.9375 | 254 | 0.6122 | 0.4774 | 0.6122 | 0.7825 |
| No log | 8.0 | 256 | 0.6433 | 0.5115 | 0.6433 | 0.8021 |
| No log | 8.0625 | 258 | 0.6220 | 0.4446 | 0.6220 | 0.7887 |
| No log | 8.125 | 260 | 0.6339 | 0.4626 | 0.6339 | 0.7962 |
| No log | 8.1875 | 262 | 0.7102 | 0.4814 | 0.7102 | 0.8427 |
| No log | 8.25 | 264 | 0.6479 | 0.4664 | 0.6479 | 0.8050 |
| No log | 8.3125 | 266 | 0.6031 | 0.4978 | 0.6031 | 0.7766 |
| No log | 8.375 | 268 | 0.6822 | 0.5423 | 0.6822 | 0.8259 |
| No log | 8.4375 | 270 | 0.6713 | 0.5163 | 0.6713 | 0.8193 |
| No log | 8.5 | 272 | 0.5960 | 0.5043 | 0.5960 | 0.7720 |
| No log | 8.5625 | 274 | 0.6087 | 0.4091 | 0.6087 | 0.7802 |
| No log | 8.625 | 276 | 0.6997 | 0.4538 | 0.6997 | 0.8365 |
| No log | 8.6875 | 278 | 0.7370 | 0.4598 | 0.7370 | 0.8585 |
| No log | 8.75 | 280 | 0.6487 | 0.5214 | 0.6487 | 0.8054 |
| No log | 8.8125 | 282 | 0.6632 | 0.4901 | 0.6632 | 0.8144 |
| No log | 8.875 | 284 | 0.8019 | 0.4881 | 0.8019 | 0.8955 |
| No log | 8.9375 | 286 | 0.7709 | 0.4838 | 0.7709 | 0.8780 |
| No log | 9.0 | 288 | 0.6375 | 0.4669 | 0.6375 | 0.7984 |
| No log | 9.0625 | 290 | 0.6182 | 0.5054 | 0.6182 | 0.7863 |
| No log | 9.125 | 292 | 0.6193 | 0.4763 | 0.6193 | 0.7870 |
| No log | 9.1875 | 294 | 0.5990 | 0.4897 | 0.5990 | 0.7740 |
| No log | 9.25 | 296 | 0.6192 | 0.5014 | 0.6192 | 0.7869 |
| No log | 9.3125 | 298 | 0.6062 | 0.5058 | 0.6062 | 0.7786 |
| No log | 9.375 | 300 | 0.5980 | 0.4726 | 0.5980 | 0.7733 |
| No log | 9.4375 | 302 | 0.6114 | 0.4872 | 0.6114 | 0.7819 |
| No log | 9.5 | 304 | 0.6109 | 0.5383 | 0.6109 | 0.7816 |
| No log | 9.5625 | 306 | 0.6128 | 0.5106 | 0.6128 | 0.7828 |
| No log | 9.625 | 308 | 0.6171 | 0.5152 | 0.6171 | 0.7856 |
| No log | 9.6875 | 310 | 0.6129 | 0.5012 | 0.6129 | 0.7829 |
| No log | 9.75 | 312 | 0.6076 | 0.5203 | 0.6076 | 0.7795 |
| No log | 9.8125 | 314 | 0.6129 | 0.5476 | 0.6129 | 0.7829 |
| No log | 9.875 | 316 | 0.6416 | 0.5185 | 0.6416 | 0.8010 |
| No log | 9.9375 | 318 | 0.6311 | 0.5308 | 0.6311 | 0.7944 |
| No log | 10.0 | 320 | 0.6113 | 0.4915 | 0.6113 | 0.7819 |
| No log | 10.0625 | 322 | 0.6329 | 0.4721 | 0.6329 | 0.7956 |
| No log | 10.125 | 324 | 0.6410 | 0.4721 | 0.6410 | 0.8006 |
| No log | 10.1875 | 326 | 0.6201 | 0.4740 | 0.6201 | 0.7875 |
| No log | 10.25 | 328 | 0.5851 | 0.4783 | 0.5851 | 0.7649 |
| No log | 10.3125 | 330 | 0.5741 | 0.4603 | 0.5741 | 0.7577 |
| No log | 10.375 | 332 | 0.5851 | 0.5108 | 0.5851 | 0.7649 |
| No log | 10.4375 | 334 | 0.6111 | 0.4811 | 0.6111 | 0.7817 |
| No log | 10.5 | 336 | 0.6018 | 0.4827 | 0.6018 | 0.7758 |
| No log | 10.5625 | 338 | 0.6000 | 0.4709 | 0.6000 | 0.7746 |
| No log | 10.625 | 340 | 0.6022 | 0.4901 | 0.6022 | 0.7760 |
| No log | 10.6875 | 342 | 0.5767 | 0.4930 | 0.5767 | 0.7594 |
| No log | 10.75 | 344 | 0.5871 | 0.5494 | 0.5871 | 0.7662 |
| No log | 10.8125 | 346 | 0.5970 | 0.5916 | 0.5970 | 0.7727 |
| No log | 10.875 | 348 | 0.5732 | 0.5333 | 0.5732 | 0.7571 |
| No log | 10.9375 | 350 | 0.6124 | 0.5088 | 0.6124 | 0.7826 |
| No log | 11.0 | 352 | 0.6176 | 0.5074 | 0.6176 | 0.7859 |
| No log | 11.0625 | 354 | 0.5780 | 0.5292 | 0.5780 | 0.7603 |
| No log | 11.125 | 356 | 0.5748 | 0.5271 | 0.5748 | 0.7582 |
| No log | 11.1875 | 358 | 0.5636 | 0.5263 | 0.5636 | 0.7508 |
| No log | 11.25 | 360 | 0.5501 | 0.5371 | 0.5501 | 0.7417 |
| No log | 11.3125 | 362 | 0.5499 | 0.5492 | 0.5499 | 0.7415 |
| No log | 11.375 | 364 | 0.5868 | 0.5416 | 0.5868 | 0.7660 |
| No log | 11.4375 | 366 | 0.5671 | 0.5659 | 0.5671 | 0.7530 |
| No log | 11.5 | 368 | 0.5651 | 0.5159 | 0.5651 | 0.7517 |
| No log | 11.5625 | 370 | 0.5596 | 0.5005 | 0.5596 | 0.7481 |
| No log | 11.625 | 372 | 0.5736 | 0.5051 | 0.5736 | 0.7574 |
| No log | 11.6875 | 374 | 0.6130 | 0.4755 | 0.6130 | 0.7829 |
| No log | 11.75 | 376 | 0.6847 | 0.5433 | 0.6847 | 0.8275 |
| No log | 11.8125 | 378 | 0.6846 | 0.5433 | 0.6846 | 0.8274 |
| No log | 11.875 | 380 | 0.6122 | 0.4822 | 0.6122 | 0.7825 |
| No log | 11.9375 | 382 | 0.5870 | 0.5269 | 0.5870 | 0.7662 |
| No log | 12.0 | 384 | 0.5978 | 0.5708 | 0.5978 | 0.7732 |
| No log | 12.0625 | 386 | 0.5928 | 0.5385 | 0.5928 | 0.7699 |
| No log | 12.125 | 388 | 0.5881 | 0.5479 | 0.5881 | 0.7669 |
| No log | 12.1875 | 390 | 0.5825 | 0.5282 | 0.5825 | 0.7632 |
| No log | 12.25 | 392 | 0.5960 | 0.5026 | 0.5960 | 0.7720 |
| No log | 12.3125 | 394 | 0.6686 | 0.4772 | 0.6686 | 0.8177 |
| No log | 12.375 | 396 | 0.6472 | 0.5027 | 0.6472 | 0.8045 |
| No log | 12.4375 | 398 | 0.6027 | 0.4845 | 0.6027 | 0.7764 |
| No log | 12.5 | 400 | 0.5724 | 0.5124 | 0.5724 | 0.7566 |
| No log | 12.5625 | 402 | 0.6213 | 0.5347 | 0.6213 | 0.7882 |
| No log | 12.625 | 404 | 0.6475 | 0.5605 | 0.6475 | 0.8047 |
| No log | 12.6875 | 406 | 0.6065 | 0.5868 | 0.6065 | 0.7788 |
| No log | 12.75 | 408 | 0.6323 | 0.5115 | 0.6323 | 0.7952 |
| No log | 12.8125 | 410 | 0.7075 | 0.4844 | 0.7075 | 0.8411 |
| No log | 12.875 | 412 | 0.6520 | 0.5076 | 0.6520 | 0.8075 |
| No log | 12.9375 | 414 | 0.5845 | 0.5139 | 0.5845 | 0.7645 |
| No log | 13.0 | 416 | 0.5615 | 0.5019 | 0.5615 | 0.7494 |
| No log | 13.0625 | 418 | 0.5559 | 0.5026 | 0.5559 | 0.7456 |
| No log | 13.125 | 420 | 0.5641 | 0.5084 | 0.5641 | 0.7510 |
| No log | 13.1875 | 422 | 0.5687 | 0.5148 | 0.5687 | 0.7541 |
| No log | 13.25 | 424 | 0.5639 | 0.4963 | 0.5639 | 0.7510 |
| No log | 13.3125 | 426 | 0.5698 | 0.4827 | 0.5698 | 0.7549 |
| No log | 13.375 | 428 | 0.5715 | 0.5012 | 0.5715 | 0.7559 |
| No log | 13.4375 | 430 | 0.6324 | 0.5019 | 0.6324 | 0.7952 |
| No log | 13.5 | 432 | 0.6303 | 0.4960 | 0.6303 | 0.7939 |
| No log | 13.5625 | 434 | 0.5699 | 0.4738 | 0.5699 | 0.7549 |
| No log | 13.625 | 436 | 0.5681 | 0.4418 | 0.5681 | 0.7537 |
| No log | 13.6875 | 438 | 0.5694 | 0.4830 | 0.5694 | 0.7546 |
| No log | 13.75 | 440 | 0.5724 | 0.4808 | 0.5724 | 0.7566 |
| No log | 13.8125 | 442 | 0.5834 | 0.5005 | 0.5834 | 0.7638 |
| No log | 13.875 | 444 | 0.5817 | 0.5079 | 0.5817 | 0.7627 |
| No log | 13.9375 | 446 | 0.5650 | 0.5145 | 0.5650 | 0.7517 |
| No log | 14.0 | 448 | 0.5771 | 0.4560 | 0.5771 | 0.7597 |
| No log | 14.0625 | 450 | 0.6203 | 0.4385 | 0.6203 | 0.7876 |
| No log | 14.125 | 452 | 0.5898 | 0.4758 | 0.5898 | 0.7680 |
| No log | 14.1875 | 454 | 0.5727 | 0.4932 | 0.5727 | 0.7568 |
| No log | 14.25 | 456 | 0.5877 | 0.4928 | 0.5877 | 0.7666 |
| No log | 14.3125 | 458 | 0.5847 | 0.5081 | 0.5847 | 0.7647 |
| No log | 14.375 | 460 | 0.5791 | 0.4851 | 0.5791 | 0.7610 |
| No log | 14.4375 | 462 | 0.5804 | 0.4924 | 0.5804 | 0.7619 |
| No log | 14.5 | 464 | 0.5729 | 0.5121 | 0.5729 | 0.7569 |
| No log | 14.5625 | 466 | 0.5836 | 0.5100 | 0.5836 | 0.7639 |
| No log | 14.625 | 468 | 0.5739 | 0.4827 | 0.5739 | 0.7576 |
| No log | 14.6875 | 470 | 0.5708 | 0.5040 | 0.5708 | 0.7555 |
| No log | 14.75 | 472 | 0.5752 | 0.4609 | 0.5752 | 0.7584 |
| No log | 14.8125 | 474 | 0.5857 | 0.4827 | 0.5857 | 0.7653 |
| No log | 14.875 | 476 | 0.5889 | 0.4733 | 0.5889 | 0.7674 |
| No log | 14.9375 | 478 | 0.5905 | 0.5290 | 0.5905 | 0.7684 |
| No log | 15.0 | 480 | 0.5941 | 0.4941 | 0.5941 | 0.7708 |
| No log | 15.0625 | 482 | 0.5952 | 0.4849 | 0.5952 | 0.7715 |
| No log | 15.125 | 484 | 0.6003 | 0.5293 | 0.6003 | 0.7748 |
| No log | 15.1875 | 486 | 0.6253 | 0.4917 | 0.6253 | 0.7907 |
| No log | 15.25 | 488 | 0.6366 | 0.4806 | 0.6366 | 0.7979 |
| No log | 15.3125 | 490 | 0.6165 | 0.4917 | 0.6165 | 0.7852 |
| No log | 15.375 | 492 | 0.6041 | 0.5123 | 0.6041 | 0.7772 |
| No log | 15.4375 | 494 | 0.6048 | 0.4457 | 0.6048 | 0.7777 |
| No log | 15.5 | 496 | 0.6090 | 0.4921 | 0.6090 | 0.7804 |
| No log | 15.5625 | 498 | 0.6381 | 0.4965 | 0.6381 | 0.7988 |
| 0.326 | 15.625 | 500 | 0.6474 | 0.4785 | 0.6474 | 0.8046 |
| 0.326 | 15.6875 | 502 | 0.6194 | 0.4714 | 0.6194 | 0.7870 |
| 0.326 | 15.75 | 504 | 0.6188 | 0.5296 | 0.6188 | 0.7867 |
| 0.326 | 15.8125 | 506 | 0.6182 | 0.4573 | 0.6182 | 0.7862 |
| 0.326 | 15.875 | 508 | 0.6071 | 0.4434 | 0.6071 | 0.7792 |
| 0.326 | 15.9375 | 510 | 0.5855 | 0.4368 | 0.5855 | 0.7652 |
| 0.326 | 16.0 | 512 | 0.5856 | 0.4930 | 0.5856 | 0.7652 |
| 0.326 | 16.0625 | 514 | 0.6062 | 0.5077 | 0.6062 | 0.7786 |
| 0.326 | 16.125 | 516 | 0.5856 | 0.4934 | 0.5856 | 0.7653 |
| 0.326 | 16.1875 | 518 | 0.5770 | 0.4745 | 0.5770 | 0.7596 |
| 0.326 | 16.25 | 520 | 0.5696 | 0.4676 | 0.5696 | 0.7547 |
| 0.326 | 16.3125 | 522 | 0.5746 | 0.5173 | 0.5746 | 0.7580 |
| 0.326 | 16.375 | 524 | 0.5944 | 0.5011 | 0.5944 | 0.7709 |
| 0.326 | 16.4375 | 526 | 0.6193 | 0.5003 | 0.6193 | 0.7870 |
| 0.326 | 16.5 | 528 | 0.6344 | 0.4872 | 0.6344 | 0.7965 |
| 0.326 | 16.5625 | 530 | 0.6797 | 0.4824 | 0.6797 | 0.8244 |
| 0.326 | 16.625 | 532 | 0.6438 | 0.4659 | 0.6438 | 0.8024 |
| 0.326 | 16.6875 | 534 | 0.5807 | 0.4725 | 0.5807 | 0.7620 |
| 0.326 | 16.75 | 536 | 0.5788 | 0.4410 | 0.5788 | 0.7608 |
| 0.326 | 16.8125 | 538 | 0.5878 | 0.4530 | 0.5878 | 0.7667 |
| 0.326 | 16.875 | 540 | 0.5739 | 0.4276 | 0.5739 | 0.7575 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
TareksTesting/Legion-V1.2-LLaMa-70B | TareksTesting | "2025-03-18T10:01:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:TareksLab/M-BASE-SCE",
"base_model:merge:TareksLab/M-BASE-SCE",
"base_model:TareksLab/M-MERGE1",
"base_model:merge:TareksLab/M-MERGE1",
"base_model:TareksLab/M-MERGE2",
"base_model:merge:TareksLab/M-MERGE2",
"base_model:TareksLab/M-MERGE3",
"base_model:merge:TareksLab/M-MERGE3",
"base_model:TareksLab/M-MERGE4",
"base_model:merge:TareksLab/M-MERGE4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-18T09:23:48Z" | ---
base_model:
- TareksLab/M-MERGE4
- TareksLab/M-MERGE1
- TareksLab/M-BASE-SCE
- TareksLab/M-MERGE2
- TareksLab/M-MERGE3
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [TareksLab/M-BASE-SCE](https://huggingface.co/TareksLab/M-BASE-SCE) as a base.
### Models Merged
The following models were included in the merge:
* [TareksLab/M-MERGE4](https://huggingface.co/TareksLab/M-MERGE4)
* [TareksLab/M-MERGE1](https://huggingface.co/TareksLab/M-MERGE1)
* [TareksLab/M-MERGE2](https://huggingface.co/TareksLab/M-MERGE2)
* [TareksLab/M-MERGE3](https://huggingface.co/TareksLab/M-MERGE3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TareksLab/M-MERGE4
parameters:
weight: 0.20
density: 0.5
- model: TareksLab/M-MERGE2
parameters:
weight: 0.20
density: 0.5
- model: TareksLab/M-MERGE3
parameters:
weight: 0.20
density: 0.5
- model: TareksLab/M-MERGE1
parameters:
weight: 0.20
density: 0.5
- model: TareksLab/M-BASE-SCE
parameters:
weight: 0.20
density: 0.5
merge_method: dare_ties
base_model: TareksLab/M-BASE-SCE
parameters:
normalize: false
out_dtype: bfloat16
tokenizer:
source: base
```
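For reference, a merge from a config like this is typically reproduced with mergekit's CLI; a sketch, with the file paths as placeholders:

```bash
pip install mergekit
# config.yaml holds the YAML above; ./merged-model is the output directory.
mergekit-yaml config.yaml ./merged-model --cuda
```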
|
ClarenceDan/5e847ac7-d13c-45fa-894c-b302f6fd092f | ClarenceDan | "2025-01-22T00:32:45Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | "2025-01-22T00:31:46Z" | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5e847ac7-d13c-45fa-894c-b302f6fd092f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/tiny-dummy-qwen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 47240b36837aca7c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/47240b36837aca7c_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/5e847ac7-d13c-45fa-894c-b302f6fd092f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/47240b36837aca7c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9ab5ced1-a87f-49f2-adaf-7005af4440ac
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9ab5ced1-a87f-49f2-adaf-7005af4440ac
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5e847ac7-d13c-45fa-894c-b302f6fd092f
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.9326
## Model description
More information needed
## Intended uses & limitations
More information needed
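A minimal loading sketch, assuming the adapter applies on top of the tiny dummy base model from the config above; `merge_and_unload()` folds the LoRA weights back into the base for plain `transformers` inference:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Base model and adapter ids come from the axolotl config above.
base = AutoModelForCausalLM.from_pretrained("fxmarty/tiny-dummy-qwen2")
model = PeftModel.from_pretrained(base, "ClarenceDan/5e847ac7-d13c-45fa-894c-b302f6fd092f")
merged = model.merge_and_unload()  # optional: bake the adapter into the base weights
```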
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 11.9345 | 0.0002 | 1 | 11.9327 |
| 11.9313 | 0.0005 | 3 | 11.9327 |
| 11.9318 | 0.0010 | 6 | 11.9327 |
| 11.9316 | 0.0015 | 9 | 11.9326 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
EugeneS1/GAN_gameplaymodel | EugeneS1 | "2024-05-10T05:54:15Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-10T05:43:40Z" | ---
license: apache-2.0
---
|