| modelId (string, 5–138 chars) | author (string, 2–42 chars) | last_modified (date, 2020-02-15 11:33:14 to 2025-04-20 06:26:59) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 429 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 to 2025-04-20 06:26:36) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF | mradermacher | "2025-04-08T09:33:56Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:WaveCut/Fallen_Amoral_Gemma3-12B",
"base_model:quantized:WaveCut/Fallen_Amoral_Gemma3-12B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-04-08T08:51:01Z" | ---
base_model: WaveCut/Fallen_Amoral_Gemma3-12B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/WaveCut/Fallen_Amoral_Gemma3-12B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
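As a minimal sketch of the usual workflow (assuming `huggingface_hub` and `llama-cpp-python` are installed; the file name is taken from the table below):
```python
# Minimal sketch: download one imatrix quant and load it with llama-cpp-python.
# Context size and generation settings are illustrative; tune them to your hardware.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF",
    filename="Fallen_Amoral_Gemma3-12B.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
print(llm("Hello,", max_tokens=32)["choices"][0]["text"])
```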
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-Q4_1.gguf) | i1-Q4_1 | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.5 | |
| [GGUF](https://huggingface.co/mradermacher/Fallen_Amoral_Gemma3-12B-i1-GGUF/resolve/main/Fallen_Amoral_Gemma3-12B.i1-Q6_K.gguf) | i1-Q6_K | 9.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/KoQwen_72B_v5.0-i1-GGUF | mradermacher | "2025-01-16T08:25:08Z" | 484 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"ko",
"en",
"base_model:spow12/KoQwen_72B_v5.0",
"base_model:quantized:spow12/KoQwen_72B_v5.0",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-16T03:23:48Z" | ---
base_model: spow12/KoQwen_72B_v5.0
language:
- ko
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/spow12/KoQwen_72B_v5.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/KoQwen_72B_v5.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
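For the split files in the table below (`.part1of2`/`.part2of2`), here is a minimal sketch of the byte-level concatenation those READMEs describe; a plain `cat part1 part2 > file.gguf` is equivalent:
```python
# Minimal sketch: rejoin split GGUF parts into a single file before loading.
import shutil

parts = [
    "KoQwen_72B_v5.0.i1-Q5_K_S.gguf.part1of2",
    "KoQwen_72B_v5.0.i1-Q5_K_S.gguf.part2of2",
]
with open("KoQwen_72B_v5.0.i1-Q5_K_S.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # stream bytes; parts never fully load into RAM
```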
## Provided Quants
(sorted by size, not necessarily quality; IQ quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q2_K_S.gguf) | i1-Q2_K_S | 29.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q4_1.gguf) | i1-Q4_1 | 45.8 | |
| [GGUF](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/KoQwen_72B_v5.0-i1-GGUF/resolve/main/KoQwen_72B_v5.0.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
cunghoctienganh/5e7adf04-9165-4819-aa73-bfae084adb0b | cunghoctienganh | "2025-01-19T07:28:46Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"base_model:adapter:tokyotech-llm/Llama-3-Swallow-8B-v0.1",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-19T07:10:09Z" | ---
library_name: peft
license: llama3
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5e7adf04-9165-4819-aa73-bfae084adb0b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - d73dc80d17fcf4ce_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/d73dc80d17fcf4ce_train_data.json
  type:
    field_instruction: sentence
    field_output: tagged_output
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: cunghoctienganh/5e7adf04-9165-4819-aa73-bfae084adb0b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/d73dc80d17fcf4ce_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
  pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1b69fe9c-5f20-4739-b9aa-6fdcf670d605
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1b69fe9c-5f20-4739-b9aa-6fdcf670d605
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5e7adf04-9165-4819-aa73-bfae084adb0b
This model is a fine-tuned version of [tokyotech-llm/Llama-3-Swallow-8B-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-v0.1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1043
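A minimal inference sketch (assuming the adapter loads on top of the listed base model with the `peft`/`transformers` versions pinned under Framework versions below; the prompt is illustrative):
```python
# Minimal sketch: attach this LoRA adapter to its base model for inference.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("tokyotech-llm/Llama-3-Swallow-8B-v0.1")
model = PeftModel.from_pretrained(base, "cunghoctienganh/5e7adf04-9165-4819-aa73-bfae084adb0b")
tokenizer = AutoTokenizer.from_pretrained("tokyotech-llm/Llama-3-Swallow-8B-v0.1")

inputs = tokenizer("Tag this sentence:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```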
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1257 | 0.1762 | 200 | 0.1043 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tornwhisp/vampi3 | tornwhisp | "2025-03-17T22:45:23Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-03-17T22:45:01Z" | ---
license: apache-2.0
---
|
sail-rvc/Marilia_Mendonca | sail-rvc | "2023-07-14T07:27:42Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:27:11Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Marilia_Mendonca
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:27:42
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
abc88767/model91 | abc88767 | "2024-05-07T12:29:44Z" | 129 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-07T12:28:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
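In the absence of author-provided code, a purely hypothetical sketch (assuming the standard `transformers` causal-LM interface implied by the repo's `stablelm`/`text-generation` tags):
```python
# Hypothetical usage sketch; the card itself provides no official example.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("abc88767/model91")
model = AutoModelForCausalLM.from_pretrained("abc88767/model91")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```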
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
morturr/Mistral-7B-v0.1-PAIR_yelp_reviews_dadjokes-COMB-yelp_reviews-comb-3-seed-18-2025-02-11 | morturr | "2025-02-11T17:36:24Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2025-02-11T17:36:09Z" | ---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-PAIR_yelp_reviews_dadjokes-COMB-yelp_reviews-comb-3-seed-18-2025-02-11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-PAIR_yelp_reviews_dadjokes-COMB-yelp_reviews-comb-3-seed-18-2025-02-11
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 150
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1 |
shibajustfor/5705a0e4-8db6-4a30-b11d-428658efabf6 | shibajustfor | "2025-02-02T18:17:35Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:lmsys/vicuna-13b-v1.5",
"base_model:adapter:lmsys/vicuna-13b-v1.5",
"license:llama2",
"region:us"
] | null | "2025-02-02T18:10:47Z" | ---
library_name: peft
license: llama2
base_model: lmsys/vicuna-13b-v1.5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5705a0e4-8db6-4a30-b11d-428658efabf6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lmsys/vicuna-13b-v1.5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - 0ef748d31bc606d1_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/0ef748d31bc606d1_train_data.json
  type:
    field_instruction: scenario
    field_output: description
    format: '{instruction}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: shibajustfor/5705a0e4-8db6-4a30-b11d-428658efabf6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: constant
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/0ef748d31bc606d1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1fb622ba-68d2-442a-8e77-2a5302cdf36a
wandb_project: Birthday-SN56-38-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1fb622ba-68d2-442a-8e77-2a5302cdf36a
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5705a0e4-8db6-4a30-b11d-428658efabf6
This model is a fine-tuned version of [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8620
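A minimal sketch for trying the adapter (assuming 4-bit loading via `bitsandbytes` to fit the 13B base on a single GPU; the quantization settings are illustrative):
```python
# Minimal sketch: load the 13B base in 4-bit, then attach this LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-13b-v1.5", quantization_config=bnb)
model = PeftModel.from_pretrained(base, "shibajustfor/5705a0e4-8db6-4a30-b11d-428658efabf6")
```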
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 1.2978 |
| 0.8927 | 0.0268 | 50 | 0.9102 |
| 0.9061 | 0.0535 | 100 | 0.8843 |
| 0.8239 | 0.0803 | 150 | 0.8728 |
| 0.8597 | 0.1070 | 200 | 0.8620 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
RichardErkhov/SepKeyPro_-_SuperQwen2-7B-Chat-8bits | RichardErkhov | "2025-03-26T19:21:45Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-03-26T18:58:36Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
SuperQwen2-7B-Chat - bnb 8bits
- Model creator: https://huggingface.co/SepKeyPro/
- Original model: https://huggingface.co/SepKeyPro/SuperQwen2-7B-Chat/
Original model description:
---
tags:
- merge
- mergekit
---
# MergedQwen2-7B-Chat
MergedQwen2-7B-Chat is a merge of the following models using Mergekit:
- [Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)
- [natong19/Qwen2-7B-Instruct-abliterated](https://huggingface.co/natong19/Qwen2-7B-Instruct-abliterated)
- [cognitivecomputations/dolphin-2.9.2-qwen2-7b](https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-7b)
## Configuration
```yaml
models:
- model: Qwen/Qwen2-7B-Instruct
- model: natong19/Qwen2-7B-Instruct-abliterated
- model: cognitivecomputations/dolphin-2.9.2-qwen2-7b
merge_method: model_stock
base_model: Qwen/Qwen2-7B-Instruct
dtype: bfloat16
```
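A hedged reproduction sketch (assuming `mergekit` is installed and the configuration above is saved as `merge_config.yaml`; the output path is illustrative):
```python
# Hypothetical sketch: run mergekit's command-line entry point on the saved
# config; the merged model is written to ./merged-qwen2-7b-chat.
import subprocess

subprocess.run(
    ["mergekit-yaml", "merge_config.yaml", "./merged-qwen2-7b-chat"],
    check=True,
)
```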
|
vikash06/Hallucination-model-True-dataset | vikash06 | "2024-05-13T03:59:20Z" | 108 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:2204.04991",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-13T03:44:03Z" | ---
library_name: transformers
license: mit
metrics:
- f1
- accuracy
pipeline_tag: text-classification
---
# Model Card for Model ID
The model detects hallucinations and outputs NLI scores. It has been trained on the TRUE dataset (93k samples), reaching a 0.91 F1 score.
## Model Details
A cross-encoder model trained on the TRUE dataset to detect hallucination, with a focus on summarization.
Natural Language Inference (NLI) involves deciding if a "hypothesis" is logically supported by a "premise."
Simply put, it's about figuring out if a given statement (the hypothesis) is true based on another statement (the premise)
that serves as your sole information about the topic.
## Uses

## Bias, Risks, and Limitations
You can fine-tune this model for specific tasks, but using it directly on highly specialized financial or medical documents is not recommended.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
import numpy as np
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained('vikash06/Hallucination-model-True-dataset')
tokenizer = AutoTokenizer.from_pretrained('vikash06/Hallucination-model-True-dataset')
model = model.to("cuda:0")
model.eval()

# Each pair is [premise, hypothesis]; the model scores whether the premise supports the hypothesis.
pairs = [["Colin Kaepernick . Kaepernick began his professional career as a backup to Alex Smith , but became the 49ers ' starter in the middle of the 2012 season after Smith suffered a concussion . He remained the team 's starting quarterback for the rest of the season and went on to lead the 49ers to their first Super Bowl appearance since 1994 , losing to the Baltimore Ravens .",
          'Colin Kaepernick became a starting quarterback during the 49ers 63rd season in the National Football League.'],
         ["Soul Food is a 1997 American comedy-drama film produced by Kenneth `` Babyface '' Edmonds , Tracey Edmonds and Robert Teitel and released by Fox 2000 Pictures .",
          'Fox 2000 Pictures released the film Soul Food.']]

inputs = tokenizer.batch_encode_plus(pairs, return_tensors='pt', padding=True, truncation=True)
inputs = inputs.to("cuda:0")

with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits  # ensure your model outputs logits directly
scores = (1 / (1 + np.exp(-logits.cpu().numpy()))).flatten()  # sigmoid maps logits to [0, 1]
```
The scores lie between 0 and 1, where 1 represents no hallucination and 0 represents hallucination.
### Training Data
TRUE dataset (all 93k samples): https://arxiv.org/pdf/2204.04991
|
nttx/af8deca1-f67c-4803-b127-f3a9bee2174f | nttx | "2025-01-27T08:18:36Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-27T08:08:11Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: af8deca1-f67c-4803-b127-f3a9bee2174f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
  - 790edca1cc0ca732_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/790edca1cc0ca732_train_data.json
  type:
    field_input: choices
    field_instruction: question
    field_output: context
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/af8deca1-f67c-4803-b127-f3a9bee2174f
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
  0: 75GB
max_steps: 200
micro_batch_size: 4
mlflow_experiment_name: /tmp/790edca1cc0ca732_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7009dc63-3020-4c81-a0dc-918c206c887e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7009dc63-3020-4c81-a0dc-918c206c887e
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# af8deca1-f67c-4803-b127-f3a9bee2174f
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5626
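A minimal sketch for folding the adapter into the base weights for standalone deployment (assuming `peft`'s standard merge flow; the output path is illustrative):
```python
# Minimal sketch: merge the LoRA weights into the base model and save a
# standalone checkpoint; repo ids are taken from this card.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "nttx/af8deca1-f67c-4803-b127-f3a9bee2174f")
merged = model.merge_and_unload()  # returns a plain transformers model
merged.save_pretrained("./qwen2-1.5b-merged")
```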
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.4959 | 0.7641 | 200 | 2.5626 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kyo/distilbert-base-uncased-finetuned-imdb | kyo | "2021-12-09T15:29:34Z" | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4718
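A minimal sketch for querying the fine-tuned masked-language model (assuming the standard `transformers` pipeline API; the example sentence is illustrative):
```python
# Minimal sketch: top fill-mask predictions from the IMDb-adapted checkpoint.
from transformers import pipeline

fill = pipeline("fill-mask", model="kyo/distilbert-base-uncased-finetuned-imdb")
for pred in fill("This movie was a [MASK] experience."):
    print(pred["token_str"], round(pred["score"], 3))
```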
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.707 | 1.0 | 157 | 2.4883 |
| 2.572 | 2.0 | 314 | 2.4240 |
| 2.5377 | 3.0 | 471 | 2.4355 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
kwwww/bert-base-uncased-test_16_5345 | kwwww | "2023-10-17T02:44:09Z" | 0 | 0 | null | [
"pytorch",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"region:us"
] | null | "2023-10-16T16:23:54Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
- accuracy
model-index:
- name: bert-base-uncased-test_16_5345
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-test_16_5345
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8214
- F1: {'f1': 0.8846905537459283}
- Accuracy: {'accuracy': 0.8302972195589645}
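A minimal sketch for running the classifier (assuming the checkpoint loads through the standard `transformers` pipeline; the example sentence is illustrative):
```python
# Minimal sketch: score a sentence with the fine-tuned GLUE-style classifier.
from transformers import pipeline

clf = pipeline("text-classification", model="kwwww/bert-base-uncased-test_16_5345")
print(clf("The service was excellent and the food arrived on time."))
```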
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy |
|:-------------:|:------:|:------:|:---------------:|:--------------------------:|:--------------------------------:|
| No log | 1.0 | 335 | 0.5804 | {'f1': 0.817717206132879} | {'accuracy': 0.6922339405560882} |
| 0.5859 | 2.0 | 670 | 0.4981 | {'f1': 0.8567870485678706} | {'accuracy': 0.7794822627037392} |
| 0.4977 | 3.0 | 1005 | 0.4943 | {'f1': 0.8609022556390977} | {'accuracy': 0.7871524448705657} |
| 0.4977 | 4.0 | 1340 | 0.4540 | {'f1': 0.8674388674388674} | {'accuracy': 0.8024928092042186} |
| 0.4561 | 5.0 | 1675 | 0.4592 | {'f1': 0.8654217643271088} | {'accuracy': 0.7996164908916586} |
| 0.4227 | 6.0 | 2010 | 0.4749 | {'f1': 0.8677316293929712} | {'accuracy': 0.8015340364333653} |
| 0.4227 | 7.0 | 2345 | 0.4778 | {'f1': 0.8720560152768938} | {'accuracy': 0.8072866730584851} |
| 0.3912 | 8.0 | 2680 | 0.4874 | {'f1': 0.8702770780856423} | {'accuracy': 0.8024928092042186} |
| 0.3832 | 9.0 | 3015 | 0.4636 | {'f1': 0.8751592356687897} | {'accuracy': 0.8120805369127517} |
| 0.3832 | 10.0 | 3350 | 0.4715 | {'f1': 0.873490146217419} | {'accuracy': 0.8092042186001918} |
| 0.3633 | 11.0 | 3685 | 0.4801 | {'f1': 0.8746835443037975} | {'accuracy': 0.8101629913710451} |
| 0.3534 | 12.0 | 4020 | 0.4806 | {'f1': 0.878017789072427} | {'accuracy': 0.8159156279961649} |
| 0.3534 | 13.0 | 4355 | 0.4538 | {'f1': 0.8782051282051283} | {'accuracy': 0.8178331735378715} |
| 0.3402 | 14.0 | 4690 | 0.5009 | {'f1': 0.8788265306122449} | {'accuracy': 0.8178331735378715} |
| 0.3127 | 15.0 | 5025 | 0.4424 | {'f1': 0.878334417696812} | {'accuracy': 0.8207094918504314} |
| 0.3127 | 16.0 | 5360 | 0.5587 | {'f1': 0.8708074534161491} | {'accuracy': 0.800575263662512} |
| 0.3066 | 17.0 | 5695 | 0.4430 | {'f1': 0.8855656697009102} | {'accuracy': 0.8312559923298178} |
| 0.3058 | 18.0 | 6030 | 0.4962 | {'f1': 0.8796414852752881} | {'accuracy': 0.8197507190795782} |
| 0.3058 | 19.0 | 6365 | 0.4779 | {'f1': 0.8851174934725848} | {'accuracy': 0.8312559923298178} |
| 0.2785 | 20.0 | 6700 | 0.5235 | {'f1': 0.8757097791798107} | {'accuracy': 0.8111217641418984} |
| 0.276 | 21.0 | 7035 | 0.4725 | {'f1': 0.8796895213454075} | {'accuracy': 0.8216682646212847} |
| 0.276 | 22.0 | 7370 | 0.4551 | {'f1': 0.8844884488448844} | {'accuracy': 0.8322147651006712} |
| 0.2638 | 23.0 | 7705 | 0.5246 | {'f1': 0.8878865979381444} | {'accuracy': 0.8331735378715245} |
| 0.256 | 24.0 | 8040 | 0.4898 | {'f1': 0.881859264041317} | {'accuracy': 0.8245445829338447} |
| 0.256 | 25.0 | 8375 | 0.5608 | {'f1': 0.87934301958307} | {'accuracy': 0.8168744007670182} |
| 0.2501 | 26.0 | 8710 | 0.5682 | {'f1': 0.8833967046894803} | {'accuracy': 0.8235858101629914} |
| 0.2409 | 27.0 | 9045 | 0.5014 | {'f1': 0.8786049631120054} | {'accuracy': 0.8264621284755513} |
| 0.2409 | 28.0 | 9380 | 0.6560 | {'f1': 0.8690254500310366} | {'accuracy': 0.7976989453499521} |
| 0.2274 | 29.0 | 9715 | 0.5556 | {'f1': 0.8865845755022683} | {'accuracy': 0.8322147651006712} |
| 0.2234 | 30.0 | 10050 | 0.5468 | {'f1': 0.8839285714285714} | {'accuracy': 0.825503355704698} |
| 0.2234 | 31.0 | 10385 | 0.5590 | {'f1': 0.8809368900455433} | {'accuracy': 0.8245445829338447} |
| 0.2053 | 32.0 | 10720 | 0.6394 | {'f1': 0.8862047043865225} | {'accuracy': 0.8283796740172579} |
| 0.2051 | 33.0 | 11055 | 0.6475 | {'f1': 0.8800505050505051} | {'accuracy': 0.8178331735378715} |
| 0.2051 | 34.0 | 11390 | 0.5637 | {'f1': 0.8817345597897503} | {'accuracy': 0.8274209012464045} |
| 0.2003 | 35.0 | 11725 | 0.6592 | {'f1': 0.8809675366008912} | {'accuracy': 0.8207094918504314} |
| 0.1821 | 36.0 | 12060 | 0.6021 | {'f1': 0.8776699029126214} | {'accuracy': 0.8187919463087249} |
| 0.1821 | 37.0 | 12395 | 0.6652 | {'f1': 0.8798988621997472} | {'accuracy': 0.8178331735378715} |
| 0.1794 | 38.0 | 12730 | 0.6079 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.1695 | 39.0 | 13065 | 0.6287 | {'f1': 0.884318766066838} | {'accuracy': 0.8274209012464045} |
| 0.1695 | 40.0 | 13400 | 0.6532 | {'f1': 0.8763897972531065} | {'accuracy': 0.8187919463087249} |
| 0.1709 | 41.0 | 13735 | 0.6844 | {'f1': 0.8783958602846055} | {'accuracy': 0.8197507190795782} |
| 0.1664 | 42.0 | 14070 | 0.7902 | {'f1': 0.8758664146187776} | {'accuracy': 0.8111217641418984} |
| 0.1664 | 43.0 | 14405 | 0.7809 | {'f1': 0.8750784682988072} | {'accuracy': 0.8092042186001918} |
| 0.1658 | 44.0 | 14740 | 0.7418 | {'f1': 0.8796414852752881} | {'accuracy': 0.8197507190795782} |
| 0.1626 | 45.0 | 15075 | 0.7835 | {'f1': 0.8780795957043588} | {'accuracy': 0.8149568552253116} |
| 0.1626 | 46.0 | 15410 | 0.7832 | {'f1': 0.8727272727272727} | {'accuracy': 0.8053691275167785} |
| 0.1568 | 47.0 | 15745 | 0.8162 | {'f1': 0.8786346396965866} | {'accuracy': 0.8159156279961649} |
| 0.1435 | 48.0 | 16080 | 0.7792 | {'f1': 0.8787301587301588} | {'accuracy': 0.8168744007670182} |
| 0.1435 | 49.0 | 16415 | 0.7505 | {'f1': 0.8819308545335943} | {'accuracy': 0.8264621284755513} |
| 0.1422 | 50.0 | 16750 | 0.7400 | {'f1': 0.8799480856586632} | {'accuracy': 0.822627037392138} |
| 0.1449 | 51.0 | 17085 | 0.8237 | {'f1': 0.876117496807152} | {'accuracy': 0.8139980824544583} |
| 0.1449 | 52.0 | 17420 | 0.7569 | {'f1': 0.8796844181459565} | {'accuracy': 0.8245445829338447} |
| 0.1312 | 53.0 | 17755 | 0.7673 | {'f1': 0.874251497005988} | {'accuracy': 0.8187919463087249} |
| 0.1376 | 54.0 | 18090 | 1.0797 | {'f1': 0.8718905472636816} | {'accuracy': 0.8024928092042186} |
| 0.1376 | 55.0 | 18425 | 0.8770 | {'f1': 0.8766233766233766} | {'accuracy': 0.8178331735378715} |
| 0.1205 | 56.0 | 18760 | 0.8752 | {'f1': 0.877642536835362} | {'accuracy': 0.8168744007670182} |
| 0.1277 | 57.0 | 19095 | 0.9854 | {'f1': 0.8763474952441344} | {'accuracy': 0.8130393096836049} |
| 0.1277 | 58.0 | 19430 | 0.8095 | {'f1': 0.8778576094056172} | {'accuracy': 0.8207094918504314} |
| 0.1274 | 59.0 | 19765 | 0.8494 | {'f1': 0.8825806451612904} | {'accuracy': 0.825503355704698} |
| 0.1161 | 60.0 | 20100 | 0.8805 | {'f1': 0.8752399232245682} | {'accuracy': 0.8130393096836049} |
| 0.1161 | 61.0 | 20435 | 0.8964 | {'f1': 0.87539531941809} | {'accuracy': 0.8111217641418984} |
| 0.1181 | 62.0 | 20770 | 0.8991 | {'f1': 0.8792207792207791} | {'accuracy': 0.8216682646212847} |
| 0.1039 | 63.0 | 21105 | 0.8565 | {'f1': 0.8776699029126214} | {'accuracy': 0.8187919463087249} |
| 0.1039 | 64.0 | 21440 | 0.8743 | {'f1': 0.8820512820512821} | {'accuracy': 0.8235858101629914} |
| 0.0966 | 65.0 | 21775 | 0.9298 | {'f1': 0.8802547770700637} | {'accuracy': 0.8197507190795782} |
| 0.1063 | 66.0 | 22110 | 0.9383 | {'f1': 0.8784565916398713} | {'accuracy': 0.8187919463087249} |
| 0.1063 | 67.0 | 22445 | 0.9079 | {'f1': 0.8772609819121446} | {'accuracy': 0.8178331735378715} |
| 0.0979 | 68.0 | 22780 | 0.9178 | {'f1': 0.8773523685918235} | {'accuracy': 0.8187919463087249} |
| 0.1029 | 69.0 | 23115 | 0.8791 | {'f1': 0.8752449379490528} | {'accuracy': 0.8168744007670182} |
| 0.1029 | 70.0 | 23450 | 0.8901 | {'f1': 0.8822381262199089} | {'accuracy': 0.8264621284755513} |
| 0.1026 | 71.0 | 23785 | 1.1163 | {'f1': 0.8715083798882682} | {'accuracy': 0.8015340364333653} |
| 0.0932 | 72.0 | 24120 | 1.0499 | {'f1': 0.8768161718256475} | {'accuracy': 0.8130393096836049} |
| 0.0932 | 73.0 | 24455 | 0.9786 | {'f1': 0.8854766474728086} | {'accuracy': 0.8283796740172579} |
| 0.0958 | 74.0 | 24790 | 0.9004 | {'f1': 0.8851828094932649} | {'accuracy': 0.8283796740172579} |
| 0.0897 | 75.0 | 25125 | 0.9874 | {'f1': 0.8834513844172568} | {'accuracy': 0.8264621284755513} |
| 0.0897 | 76.0 | 25460 | 0.9677 | {'f1': 0.8837209302325582} | {'accuracy': 0.8274209012464045} |
| 0.0933 | 77.0 | 25795 | 0.9813 | {'f1': 0.879177377892031} | {'accuracy': 0.8197507190795782} |
| 0.0938 | 78.0 | 26130 | 1.0899 | {'f1': 0.8772813089993707} | {'accuracy': 0.8130393096836049} |
| 0.0938 | 79.0 | 26465 | 0.9780 | {'f1': 0.8809831824062095} | {'accuracy': 0.8235858101629914} |
| 0.0849 | 80.0 | 26800 | 1.0851 | {'f1': 0.8781725888324874} | {'accuracy': 0.8159156279961649} |
| 0.0829 | 81.0 | 27135 | 0.9472 | {'f1': 0.8757396449704142} | {'accuracy': 0.8187919463087249} |
| 0.0829 | 82.0 | 27470 | 1.0028 | {'f1': 0.8785759694850604} | {'accuracy': 0.8168744007670182} |
| 0.0864 | 83.0 | 27805 | 0.9873 | {'f1': 0.8827674567584881} | {'accuracy': 0.8245445829338447} |
| 0.0787 | 84.0 | 28140 | 0.8788 | {'f1': 0.877454299255247} | {'accuracy': 0.8264621284755513} |
| 0.0787 | 85.0 | 28475 | 1.2001 | {'f1': 0.8715710723192021} | {'accuracy': 0.8024928092042186} |
| 0.0835 | 86.0 | 28810 | 0.9610 | {'f1': 0.8850129198966409} | {'accuracy': 0.8293384467881112} |
| 0.0721 | 87.0 | 29145 | 1.0461 | {'f1': 0.8811369509043928} | {'accuracy': 0.8235858101629914} |
| 0.0721 | 88.0 | 29480 | 1.0201 | {'f1': 0.8842921784098255} | {'accuracy': 0.8283796740172579} |
| 0.0745 | 89.0 | 29815 | 1.0696 | {'f1': 0.8803582853486884} | {'accuracy': 0.8207094918504314} |
| 0.0748 | 90.0 | 30150 | 1.0277 | {'f1': 0.8787483702737942} | {'accuracy': 0.8216682646212847} |
| 0.0748 | 91.0 | 30485 | 1.2406 | {'f1': 0.8743718592964824} | {'accuracy': 0.8082454458293384} |
| 0.0671 | 92.0 | 30820 | 1.0602 | {'f1': 0.8766066838046273} | {'accuracy': 0.8159156279961649} |
| 0.0668 | 93.0 | 31155 | 1.0274 | {'f1': 0.8751625487646294} | {'accuracy': 0.8159156279961649} |
| 0.0668 | 94.0 | 31490 | 1.0361 | {'f1': 0.8823911630929175} | {'accuracy': 0.8264621284755513} |
| 0.069 | 95.0 | 31825 | 1.0801 | {'f1': 0.8785166240409207} | {'accuracy': 0.8178331735378715} |
| 0.0667 | 96.0 | 32160 | 1.0670 | {'f1': 0.8834513844172568} | {'accuracy': 0.8264621284755513} |
| 0.0667 | 97.0 | 32495 | 1.2035 | {'f1': 0.8811630847029078} | {'accuracy': 0.8197507190795782} |
| 0.0631 | 98.0 | 32830 | 1.0649 | {'f1': 0.8853299167200512} | {'accuracy': 0.8283796740172579} |
| 0.0749 | 99.0 | 33165 | 1.0805 | {'f1': 0.8837508028259474} | {'accuracy': 0.8264621284755513} |
| 0.0598 | 100.0 | 33500 | 1.1548 | {'f1': 0.8844415752098128} | {'accuracy': 0.8283796740172579} |
| 0.0598 | 101.0 | 33835 | 1.1832 | {'f1': 0.8785166240409207} | {'accuracy': 0.8178331735378715} |
| 0.0596 | 102.0 | 34170 | 1.1726 | {'f1': 0.8778625954198473} | {'accuracy': 0.8159156279961649} |
| 0.0612 | 103.0 | 34505 | 1.1259 | {'f1': 0.8823151125401929} | {'accuracy': 0.8245445829338447} |
| 0.0612 | 104.0 | 34840 | 1.0810 | {'f1': 0.8857327307940606} | {'accuracy': 0.8302972195589645} |
| 0.0672 | 105.0 | 35175 | 1.1677 | {'f1': 0.8810741687979541} | {'accuracy': 0.8216682646212847} |
| 0.0624 | 106.0 | 35510 | 1.1728 | {'f1': 0.8794871794871796} | {'accuracy': 0.8197507190795782} |
| 0.0624 | 107.0 | 35845 | 1.1968 | {'f1': 0.8854766474728086} | {'accuracy': 0.8283796740172579} |
| 0.0597 | 108.0 | 36180 | 1.1626 | {'f1': 0.8841970569417786} | {'accuracy': 0.8264621284755513} |
| 0.0627 | 109.0 | 36515 | 1.1067 | {'f1': 0.8870129870129869} | {'accuracy': 0.8331735378715245} |
| 0.0627 | 110.0 | 36850 | 1.1216 | {'f1': 0.8854368932038835} | {'accuracy': 0.8302972195589645} |
| 0.0497 | 111.0 | 37185 | 1.1710 | {'f1': 0.8803088803088803} | {'accuracy': 0.8216682646212847} |
| 0.0551 | 112.0 | 37520 | 1.1882 | {'f1': 0.8802547770700637} | {'accuracy': 0.8197507190795782} |
| 0.0551 | 113.0 | 37855 | 1.1644 | {'f1': 0.8815533980582524} | {'accuracy': 0.8245445829338447} |
| 0.0597 | 114.0 | 38190 | 1.0852 | {'f1': 0.8863049095607234} | {'accuracy': 0.8312559923298178} |
| 0.0444 | 115.0 | 38525 | 1.2478 | {'f1': 0.8772151898734178} | {'accuracy': 0.8139980824544583} |
| 0.0444 | 116.0 | 38860 | 1.1119 | {'f1': 0.8817480719794344} | {'accuracy': 0.8235858101629914} |
| 0.053 | 117.0 | 39195 | 1.2588 | {'f1': 0.8709880427942103} | {'accuracy': 0.8034515819750719} |
| 0.0519 | 118.0 | 39530 | 1.1433 | {'f1': 0.8805681084570691} | {'accuracy': 0.822627037392138} |
| 0.0519 | 119.0 | 39865 | 1.1480 | {'f1': 0.8833010960670536} | {'accuracy': 0.8264621284755513} |
| 0.0567 | 120.0 | 40200 | 1.2303 | {'f1': 0.8850574712643677} | {'accuracy': 0.8274209012464045} |
| 0.0478 | 121.0 | 40535 | 1.3070 | {'f1': 0.8784213876511776} | {'accuracy': 0.8168744007670182} |
| 0.0478 | 122.0 | 40870 | 1.2091 | {'f1': 0.8850129198966409} | {'accuracy': 0.8293384467881112} |
| 0.0454 | 123.0 | 41205 | 1.2617 | {'f1': 0.8790786948176584} | {'accuracy': 0.8187919463087249} |
| 0.052 | 124.0 | 41540 | 1.1830 | {'f1': 0.8839922229423202} | {'accuracy': 0.8283796740172579} |
| 0.052 | 125.0 | 41875 | 1.1763 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0414 | 126.0 | 42210 | 1.0855 | {'f1': 0.888888888888889} | {'accuracy': 0.8389261744966443} |
| 0.0436 | 127.0 | 42545 | 1.0968 | {'f1': 0.8835078534031414} | {'accuracy': 0.8293384467881112} |
| 0.0436 | 128.0 | 42880 | 1.2527 | {'f1': 0.8784565916398713} | {'accuracy': 0.8187919463087249} |
| 0.0446 | 129.0 | 43215 | 1.1601 | {'f1': 0.8798972382787411} | {'accuracy': 0.8207094918504314} |
| 0.049 | 130.0 | 43550 | 1.1404 | {'f1': 0.8865710560625816} | {'accuracy': 0.8331735378715245} |
| 0.049 | 131.0 | 43885 | 1.1282 | {'f1': 0.8817766165904638} | {'accuracy': 0.8264621284755513} |
| 0.0423 | 132.0 | 44220 | 1.1854 | {'f1': 0.8804627249357326} | {'accuracy': 0.8216682646212847} |
| 0.0425 | 133.0 | 44555 | 1.1647 | {'f1': 0.8792207792207791} | {'accuracy': 0.8216682646212847} |
| 0.0425 | 134.0 | 44890 | 1.2484 | {'f1': 0.8813341885824246} | {'accuracy': 0.822627037392138} |
| 0.0461 | 135.0 | 45225 | 1.2505 | {'f1': 0.8812903225806452} | {'accuracy': 0.8235858101629914} |
| 0.0342 | 136.0 | 45560 | 1.1941 | {'f1': 0.8825438027255029} | {'accuracy': 0.8264621284755513} |
| 0.0342 | 137.0 | 45895 | 1.2203 | {'f1': 0.8823151125401929} | {'accuracy': 0.8245445829338447} |
| 0.0421 | 138.0 | 46230 | 1.1508 | {'f1': 0.8780804150453955} | {'accuracy': 0.8197507190795782} |
| 0.0415 | 139.0 | 46565 | 1.2784 | {'f1': 0.8766946417043254} | {'accuracy': 0.8168744007670182} |
| 0.0415 | 140.0 | 46900 | 1.2567 | {'f1': 0.8815958815958816} | {'accuracy': 0.8235858101629914} |
| 0.045 | 141.0 | 47235 | 1.1752 | {'f1': 0.8795336787564768} | {'accuracy': 0.8216682646212847} |
| 0.041 | 142.0 | 47570 | 1.3445 | {'f1': 0.8794326241134752} | {'accuracy': 0.8207094918504314} |
| 0.041 | 143.0 | 47905 | 1.2438 | {'f1': 0.8804627249357326} | {'accuracy': 0.8216682646212847} |
| 0.0464 | 144.0 | 48240 | 1.1829 | {'f1': 0.8864229765013055} | {'accuracy': 0.8331735378715245} |
| 0.0384 | 145.0 | 48575 | 1.2665 | {'f1': 0.8837209302325582} | {'accuracy': 0.8274209012464045} |
| 0.0384 | 146.0 | 48910 | 1.1613 | {'f1': 0.8884488448844885} | {'accuracy': 0.837967401725791} |
| 0.0356 | 147.0 | 49245 | 1.3262 | {'f1': 0.8798972382787411} | {'accuracy': 0.8207094918504314} |
| 0.037 | 148.0 | 49580 | 1.2797 | {'f1': 0.8797427652733119} | {'accuracy': 0.8207094918504314} |
| 0.037 | 149.0 | 49915 | 1.3262 | {'f1': 0.8757170172084129} | {'accuracy': 0.8130393096836049} |
| 0.0369 | 150.0 | 50250 | 1.2448 | {'f1': 0.8822381262199089} | {'accuracy': 0.8264621284755513} |
| 0.0365 | 151.0 | 50585 | 1.2979 | {'f1': 0.8795336787564768} | {'accuracy': 0.8216682646212847} |
| 0.0365 | 152.0 | 50920 | 1.2667 | {'f1': 0.8812459441920829} | {'accuracy': 0.8245445829338447} |
| 0.0308 | 153.0 | 51255 | 1.2065 | {'f1': 0.8820039551746869} | {'accuracy': 0.8283796740172579} |
| 0.0358 | 154.0 | 51590 | 1.0985 | {'f1': 0.8846153846153847} | {'accuracy': 0.8331735378715245} |
| 0.0358 | 155.0 | 51925 | 1.2635 | {'f1': 0.8786502271252432} | {'accuracy': 0.8207094918504314} |
| 0.0333 | 156.0 | 52260 | 1.4968 | {'f1': 0.8725674827369743} | {'accuracy': 0.8053691275167785} |
| 0.0473 | 157.0 | 52595 | 1.1381 | {'f1': 0.8814669286182057} | {'accuracy': 0.8264621284755513} |
| 0.0473 | 158.0 | 52930 | 1.4122 | {'f1': 0.8755583918315252} | {'accuracy': 0.8130393096836049} |
| 0.0294 | 159.0 | 53265 | 1.1408 | {'f1': 0.8804702808621816} | {'accuracy': 0.8245445829338447} |
| 0.0396 | 160.0 | 53600 | 1.2296 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0396 | 161.0 | 53935 | 1.2326 | {'f1': 0.8817065287653523} | {'accuracy': 0.8245445829338447} |
| 0.0376 | 162.0 | 54270 | 1.2225 | {'f1': 0.8793774319066148} | {'accuracy': 0.8216682646212847} |
| 0.0384 | 163.0 | 54605 | 1.3583 | {'f1': 0.8766603415559772} | {'accuracy': 0.8130393096836049} |
| 0.0384 | 164.0 | 54940 | 1.2230 | {'f1': 0.8830829523187459} | {'accuracy': 0.8283796740172579} |
| 0.0344 | 165.0 | 55275 | 1.1266 | {'f1': 0.8874259381171825} | {'accuracy': 0.8360498561840843} |
| 0.0316 | 166.0 | 55610 | 1.1692 | {'f1': 0.8835078534031414} | {'accuracy': 0.8293384467881112} |
| 0.0316 | 167.0 | 55945 | 1.1212 | {'f1': 0.886259040105194} | {'accuracy': 0.8341323106423778} |
| 0.0313 | 168.0 | 56280 | 1.1571 | {'f1': 0.8845654993514916} | {'accuracy': 0.8293384467881112} |
| 0.0277 | 169.0 | 56615 | 1.2483 | {'f1': 0.881859264041317} | {'accuracy': 0.8245445829338447} |
| 0.0277 | 170.0 | 56950 | 1.4021 | {'f1': 0.8757097791798107} | {'accuracy': 0.8111217641418984} |
| 0.0335 | 171.0 | 57285 | 1.4360 | {'f1': 0.8769035532994924} | {'accuracy': 0.8139980824544583} |
| 0.0283 | 172.0 | 57620 | 1.2176 | {'f1': 0.8829993535875889} | {'accuracy': 0.8264621284755513} |
| 0.0283 | 173.0 | 57955 | 1.2605 | {'f1': 0.8852883992222943} | {'accuracy': 0.8302972195589645} |
| 0.0265 | 174.0 | 58290 | 1.2043 | {'f1': 0.8878748370273793} | {'accuracy': 0.835091083413231} |
| 0.0346 | 175.0 | 58625 | 1.4101 | {'f1': 0.8780487804878049} | {'accuracy': 0.8178331735378715} |
| 0.0346 | 176.0 | 58960 | 1.3603 | {'f1': 0.878017789072427} | {'accuracy': 0.8159156279961649} |
| 0.0328 | 177.0 | 59295 | 1.4221 | {'f1': 0.8788265306122449} | {'accuracy': 0.8178331735378715} |
| 0.0415 | 178.0 | 59630 | 1.2330 | {'f1': 0.8838416612589227} | {'accuracy': 0.8283796740172579} |
| 0.0415 | 179.0 | 59965 | 1.3318 | {'f1': 0.8837209302325582} | {'accuracy': 0.8274209012464045} |
| 0.0273 | 180.0 | 60300 | 1.3779 | {'f1': 0.8800513149454778} | {'accuracy': 0.8207094918504314} |
| 0.0284 | 181.0 | 60635 | 1.3195 | {'f1': 0.8792207792207791} | {'accuracy': 0.8216682646212847} |
| 0.0284 | 182.0 | 60970 | 1.3557 | {'f1': 0.8810289389067525} | {'accuracy': 0.822627037392138} |
| 0.0306 | 183.0 | 61305 | 1.5289 | {'f1': 0.8743654822335025} | {'accuracy': 0.8101629913710451} |
| 0.027 | 184.0 | 61640 | 1.3811 | {'f1': 0.8828478964401294} | {'accuracy': 0.8264621284755513} |
| 0.027 | 185.0 | 61975 | 1.4146 | {'f1': 0.8764478764478765} | {'accuracy': 0.8159156279961649} |
| 0.0267 | 186.0 | 62310 | 1.3836 | {'f1': 0.8814432989690721} | {'accuracy': 0.8235858101629914} |
| 0.0226 | 187.0 | 62645 | 1.3915 | {'f1': 0.8805681084570691} | {'accuracy': 0.822627037392138} |
| 0.0226 | 188.0 | 62980 | 1.3690 | {'f1': 0.877284595300261} | {'accuracy': 0.8197507190795782} |
| 0.024 | 189.0 | 63315 | 1.5066 | {'f1': 0.8773946360153257} | {'accuracy': 0.8159156279961649} |
| 0.0275 | 190.0 | 63650 | 1.3999 | {'f1': 0.8808757244043787} | {'accuracy': 0.822627037392138} |
| 0.0275 | 191.0 | 63985 | 1.3867 | {'f1': 0.8758002560819463} | {'accuracy': 0.8139980824544583} |
| 0.0274 | 192.0 | 64320 | 1.3696 | {'f1': 0.8795878943979395} | {'accuracy': 0.8207094918504314} |
| 0.0274 | 193.0 | 64655 | 1.4807 | {'f1': 0.878017789072427} | {'accuracy': 0.8159156279961649} |
| 0.0274 | 194.0 | 64990 | 1.2959 | {'f1': 0.88296488946684} | {'accuracy': 0.8274209012464045} |
| 0.0276 | 195.0 | 65325 | 1.3492 | {'f1': 0.8841423948220065} | {'accuracy': 0.8283796740172579} |
| 0.0238 | 196.0 | 65660 | 1.3131 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0238 | 197.0 | 65995 | 1.3334 | {'f1': 0.8845654993514916} | {'accuracy': 0.8293384467881112} |
| 0.0216 | 198.0 | 66330 | 1.2434 | {'f1': 0.881399870382372} | {'accuracy': 0.8245445829338447} |
| 0.0337 | 199.0 | 66665 | 1.3533 | {'f1': 0.879177377892031} | {'accuracy': 0.8197507190795782} |
| 0.0204 | 200.0 | 67000 | 1.2521 | {'f1': 0.8877216021011162} | {'accuracy': 0.8360498561840843} |
| 0.0204 | 201.0 | 67335 | 1.3449 | {'f1': 0.8873056994818653} | {'accuracy': 0.8331735378715245} |
| 0.0276 | 202.0 | 67670 | 1.3239 | {'f1': 0.88671875} | {'accuracy': 0.8331735378715245} |
| 0.0224 | 203.0 | 68005 | 1.3420 | {'f1': 0.8841145833333333} | {'accuracy': 0.8293384467881112} |
| 0.0224 | 204.0 | 68340 | 1.3636 | {'f1': 0.8847150259067358} | {'accuracy': 0.8293384467881112} |
| 0.0226 | 205.0 | 68675 | 1.3225 | {'f1': 0.8871493803000652} | {'accuracy': 0.8341323106423778} |
| 0.0233 | 206.0 | 69010 | 1.3964 | {'f1': 0.8820116054158607} | {'accuracy': 0.8245445829338447} |
| 0.0233 | 207.0 | 69345 | 1.4129 | {'f1': 0.8854368932038835} | {'accuracy': 0.8302972195589645} |
| 0.0205 | 208.0 | 69680 | 1.2687 | {'f1': 0.8868660598179455} | {'accuracy': 0.8331735378715245} |
| 0.023 | 209.0 | 70015 | 1.5126 | {'f1': 0.8762626262626263} | {'accuracy': 0.8120805369127517} |
| 0.023 | 210.0 | 70350 | 1.2341 | {'f1': 0.886259040105194} | {'accuracy': 0.8341323106423778} |
| 0.0204 | 211.0 | 70685 | 1.3646 | {'f1': 0.8844155844155843} | {'accuracy': 0.8293384467881112} |
| 0.0219 | 212.0 | 71020 | 1.3274 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0219 | 213.0 | 71355 | 1.3338 | {'f1': 0.8836012861736334} | {'accuracy': 0.8264621284755513} |
| 0.0253 | 214.0 | 71690 | 1.2093 | {'f1': 0.8887425938117183} | {'accuracy': 0.837967401725791} |
| 0.0231 | 215.0 | 72025 | 1.5109 | {'f1': 0.879746835443038} | {'accuracy': 0.8178331735378715} |
| 0.0231 | 216.0 | 72360 | 1.2282 | {'f1': 0.8838709677419355} | {'accuracy': 0.8274209012464045} |
| 0.0287 | 217.0 | 72695 | 1.2655 | {'f1': 0.8856956237753103} | {'accuracy': 0.8322147651006712} |
| 0.0192 | 218.0 | 73030 | 1.4157 | {'f1': 0.8815533980582524} | {'accuracy': 0.8245445829338447} |
| 0.0192 | 219.0 | 73365 | 1.3811 | {'f1': 0.8845401174168297} | {'accuracy': 0.8302972195589645} |
| 0.0259 | 220.0 | 73700 | 1.3085 | {'f1': 0.8872870249017039} | {'accuracy': 0.835091083413231} |
| 0.0167 | 221.0 | 74035 | 1.4609 | {'f1': 0.8813341885824246} | {'accuracy': 0.822627037392138} |
| 0.0167 | 222.0 | 74370 | 1.2850 | {'f1': 0.8858085808580859} | {'accuracy': 0.8341323106423778} |
| 0.0197 | 223.0 | 74705 | 1.4649 | {'f1': 0.8829174664107486} | {'accuracy': 0.8245445829338447} |
| 0.0202 | 224.0 | 75040 | 1.1894 | {'f1': 0.8812877263581489} | {'accuracy': 0.8302972195589645} |
| 0.0202 | 225.0 | 75375 | 1.3179 | {'f1': 0.8798972382787411} | {'accuracy': 0.8207094918504314} |
| 0.0237 | 226.0 | 75710 | 1.3290 | {'f1': 0.8823529411764706} | {'accuracy': 0.8235858101629914} |
| 0.021 | 227.0 | 76045 | 1.4850 | {'f1': 0.8763474952441344} | {'accuracy': 0.8130393096836049} |
| 0.021 | 228.0 | 76380 | 1.4027 | {'f1': 0.8842921784098255} | {'accuracy': 0.8283796740172579} |
| 0.0177 | 229.0 | 76715 | 1.4112 | {'f1': 0.8809831824062095} | {'accuracy': 0.8235858101629914} |
| 0.0172 | 230.0 | 77050 | 1.3582 | {'f1': 0.88296488946684} | {'accuracy': 0.8274209012464045} |
| 0.0172 | 231.0 | 77385 | 1.4028 | {'f1': 0.8788265306122449} | {'accuracy': 0.8178331735378715} |
| 0.0195 | 232.0 | 77720 | 1.4775 | {'f1': 0.8825806451612904} | {'accuracy': 0.825503355704698} |
| 0.0154 | 233.0 | 78055 | 1.6026 | {'f1': 0.8814862267777066} | {'accuracy': 0.822627037392138} |
| 0.0154 | 234.0 | 78390 | 1.3391 | {'f1': 0.8861092824226466} | {'accuracy': 0.8341323106423778} |
| 0.0193 | 235.0 | 78725 | 1.4912 | {'f1': 0.875724404378622} | {'accuracy': 0.8149568552253116} |
| 0.0164 | 236.0 | 79060 | 1.4871 | {'f1': 0.881399870382372} | {'accuracy': 0.8245445829338447} |
| 0.0164 | 237.0 | 79395 | 1.4251 | {'f1': 0.8823529411764706} | {'accuracy': 0.8274209012464045} |
| 0.0187 | 238.0 | 79730 | 1.3903 | {'f1': 0.8833551769331585} | {'accuracy': 0.8293384467881112} |
| 0.0178 | 239.0 | 80065 | 1.3270 | {'f1': 0.8825438027255029} | {'accuracy': 0.8264621284755513} |
| 0.0178 | 240.0 | 80400 | 1.5414 | {'f1': 0.8796414852752881} | {'accuracy': 0.8197507190795782} |
| 0.0126 | 241.0 | 80735 | 1.4967 | {'f1': 0.8785166240409207} | {'accuracy': 0.8178331735378715} |
| 0.0155 | 242.0 | 81070 | 1.4005 | {'f1': 0.8832020997375328} | {'accuracy': 0.8293384467881112} |
| 0.0155 | 243.0 | 81405 | 1.6056 | {'f1': 0.8766066838046273} | {'accuracy': 0.8159156279961649} |
| 0.0147 | 244.0 | 81740 | 1.6719 | {'f1': 0.8766773162939298} | {'accuracy': 0.8149568552253116} |
| 0.0226 | 245.0 | 82075 | 1.4974 | {'f1': 0.8806161745827985} | {'accuracy': 0.8216682646212847} |
| 0.0226 | 246.0 | 82410 | 1.5430 | {'f1': 0.8795878943979395} | {'accuracy': 0.8207094918504314} |
| 0.0171 | 247.0 | 82745 | 1.3726 | {'f1': 0.8817065287653523} | {'accuracy': 0.8245445829338447} |
| 0.0226 | 248.0 | 83080 | 1.4351 | {'f1': 0.8820116054158607} | {'accuracy': 0.8245445829338447} |
| 0.0226 | 249.0 | 83415 | 1.4010 | {'f1': 0.8841145833333333} | {'accuracy': 0.8293384467881112} |
| 0.0237 | 250.0 | 83750 | 1.4589 | {'f1': 0.8804137039431158} | {'accuracy': 0.822627037392138} |
| 0.014 | 251.0 | 84085 | 1.3529 | {'f1': 0.8786127167630058} | {'accuracy': 0.8187919463087249} |
| 0.014 | 252.0 | 84420 | 1.4724 | {'f1': 0.8793873643905552} | {'accuracy': 0.8187919463087249} |
| 0.0203 | 253.0 | 84755 | 1.4837 | {'f1': 0.8796414852752881} | {'accuracy': 0.8197507190795782} |
| 0.0148 | 254.0 | 85090 | 1.8253 | {'f1': 0.8717948717948719} | {'accuracy': 0.8034515819750719} |
| 0.0148 | 255.0 | 85425 | 1.4304 | {'f1': 0.8815104166666666} | {'accuracy': 0.825503355704698} |
| 0.0149 | 256.0 | 85760 | 1.5129 | {'f1': 0.884887459807074} | {'accuracy': 0.8283796740172579} |
| 0.0122 | 257.0 | 86095 | 1.5338 | {'f1': 0.8817065287653523} | {'accuracy': 0.8245445829338447} |
| 0.0122 | 258.0 | 86430 | 1.4792 | {'f1': 0.8805194805194805} | {'accuracy': 0.8235858101629914} |
| 0.0162 | 259.0 | 86765 | 1.4745 | {'f1': 0.8821243523316061} | {'accuracy': 0.825503355704698} |
| 0.0208 | 260.0 | 87100 | 1.4651 | {'f1': 0.8835393623942747} | {'accuracy': 0.8283796740172579} |
| 0.0208 | 261.0 | 87435 | 1.3562 | {'f1': 0.8837820091923834} | {'accuracy': 0.8302972195589645} |
| 0.015 | 262.0 | 87770 | 1.4761 | {'f1': 0.8794871794871796} | {'accuracy': 0.8197507190795782} |
| 0.0143 | 263.0 | 88105 | 1.5374 | {'f1': 0.880722114764668} | {'accuracy': 0.822627037392138} |
| 0.0143 | 264.0 | 88440 | 1.3936 | {'f1': 0.8820445609436436} | {'accuracy': 0.8274209012464045} |
| 0.0184 | 265.0 | 88775 | 1.5026 | {'f1': 0.8801546391752577} | {'accuracy': 0.8216682646212847} |
| 0.0161 | 266.0 | 89110 | 1.4669 | {'f1': 0.8784925276153347} | {'accuracy': 0.8207094918504314} |
| 0.0161 | 267.0 | 89445 | 1.5627 | {'f1': 0.8757921419518377} | {'accuracy': 0.8120805369127517} |
| 0.0216 | 268.0 | 89780 | 1.3037 | {'f1': 0.8843626806833115} | {'accuracy': 0.8312559923298178} |
| 0.0137 | 269.0 | 90115 | 1.6394 | {'f1': 0.8765743073047859} | {'accuracy': 0.8120805369127517} |
| 0.0137 | 270.0 | 90450 | 1.4684 | {'f1': 0.881859264041317} | {'accuracy': 0.8245445829338447} |
| 0.0164 | 271.0 | 90785 | 1.3730 | {'f1': 0.8829993535875889} | {'accuracy': 0.8264621284755513} |
| 0.0148 | 272.0 | 91120 | 1.5189 | {'f1': 0.8789237668161436} | {'accuracy': 0.8187919463087249} |
| 0.0148 | 273.0 | 91455 | 1.2963 | {'f1': 0.8860589812332439} | {'accuracy': 0.8370086289549377} |
| 0.0131 | 274.0 | 91790 | 1.5332 | {'f1': 0.879177377892031} | {'accuracy': 0.8197507190795782} |
| 0.0125 | 275.0 | 92125 | 1.5472 | {'f1': 0.8814862267777066} | {'accuracy': 0.822627037392138} |
| 0.0125 | 276.0 | 92460 | 1.4647 | {'f1': 0.881201044386423} | {'accuracy': 0.825503355704698} |
| 0.0153 | 277.0 | 92795 | 1.4001 | {'f1': 0.8852459016393444} | {'accuracy': 0.8322147651006712} |
| 0.0118 | 278.0 | 93130 | 1.5457 | {'f1': 0.8800513149454778} | {'accuracy': 0.8207094918504314} |
| 0.0118 | 279.0 | 93465 | 1.3179 | {'f1': 0.886259040105194} | {'accuracy': 0.8341323106423778} |
| 0.0167 | 280.0 | 93800 | 1.5365 | {'f1': 0.8824289405684754} | {'accuracy': 0.825503355704698} |
| 0.0109 | 281.0 | 94135 | 1.5775 | {'f1': 0.8783610755441742} | {'accuracy': 0.8178331735378715} |
| 0.0109 | 282.0 | 94470 | 1.4469 | {'f1': 0.8855656697009102} | {'accuracy': 0.8312559923298178} |
| 0.0127 | 283.0 | 94805 | 1.5806 | {'f1': 0.8748403575989783} | {'accuracy': 0.8120805369127517} |
| 0.0167 | 284.0 | 95140 | 1.4599 | {'f1': 0.8780487804878049} | {'accuracy': 0.8178331735378715} |
| 0.0167 | 285.0 | 95475 | 1.3209 | {'f1': 0.8858085808580859} | {'accuracy': 0.8341323106423778} |
| 0.019 | 286.0 | 95810 | 1.4650 | {'f1': 0.8772378516624041} | {'accuracy': 0.8159156279961649} |
| 0.0147 | 287.0 | 96145 | 1.5146 | {'f1': 0.8800513149454778} | {'accuracy': 0.8207094918504314} |
| 0.0147 | 288.0 | 96480 | 1.5135 | {'f1': 0.8818998716302953} | {'accuracy': 0.8235858101629914} |
| 0.0144 | 289.0 | 96815 | 1.5631 | {'f1': 0.8807692307692307} | {'accuracy': 0.8216682646212847} |
| 0.012 | 290.0 | 97150 | 1.4717 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.012 | 291.0 | 97485 | 1.4136 | {'f1': 0.8817766165904638} | {'accuracy': 0.8264621284755513} |
| 0.01 | 292.0 | 97820 | 1.6090 | {'f1': 0.8755583918315252} | {'accuracy': 0.8130393096836049} |
| 0.0182 | 293.0 | 98155 | 1.3377 | {'f1': 0.8845144356955381} | {'accuracy': 0.8312559923298178} |
| 0.0182 | 294.0 | 98490 | 1.4140 | {'f1': 0.8867059593975114} | {'accuracy': 0.8341323106423778} |
| 0.0134 | 295.0 | 98825 | 1.5533 | {'f1': 0.8790786948176584} | {'accuracy': 0.8187919463087249} |
| 0.0142 | 296.0 | 99160 | 1.4165 | {'f1': 0.8817345597897503} | {'accuracy': 0.8274209012464045} |
| 0.0142 | 297.0 | 99495 | 1.4488 | {'f1': 0.8802083333333333} | {'accuracy': 0.8235858101629914} |
| 0.013 | 298.0 | 99830 | 1.5705 | {'f1': 0.8777348777348777} | {'accuracy': 0.8178331735378715} |
| 0.0116 | 299.0 | 100165 | 1.4775 | {'f1': 0.8768020969855832} | {'accuracy': 0.8197507190795782} |
| 0.0177 | 300.0 | 100500 | 1.4692 | {'f1': 0.8817065287653523} | {'accuracy': 0.8245445829338447} |
| 0.0177 | 301.0 | 100835 | 1.4669 | {'f1': 0.8828125} | {'accuracy': 0.8274209012464045} |
| 0.0162 | 302.0 | 101170 | 1.4569 | {'f1': 0.8837820091923834} | {'accuracy': 0.8302972195589645} |
| 0.0154 | 303.0 | 101505 | 1.5704 | {'f1': 0.8745198463508322} | {'accuracy': 0.8120805369127517} |
| 0.0154 | 304.0 | 101840 | 1.5991 | {'f1': 0.8770806658130602} | {'accuracy': 0.8159156279961649} |
| 0.0149 | 305.0 | 102175 | 1.6488 | {'f1': 0.8747616020343293} | {'accuracy': 0.8111217641418984} |
| 0.0157 | 306.0 | 102510 | 1.6070 | {'f1': 0.8697850821744627} | {'accuracy': 0.8024928092042186} |
| 0.0157 | 307.0 | 102845 | 1.4363 | {'f1': 0.8815533980582524} | {'accuracy': 0.8245445829338447} |
| 0.0109 | 308.0 | 103180 | 1.4503 | {'f1': 0.8785529715762274} | {'accuracy': 0.8197507190795782} |
| 0.0148 | 309.0 | 103515 | 1.5801 | {'f1': 0.869123252858958} | {'accuracy': 0.8024928092042186} |
| 0.0148 | 310.0 | 103850 | 1.4189 | {'f1': 0.8791639451338994} | {'accuracy': 0.822627037392138} |
| 0.0153 | 311.0 | 104185 | 1.4453 | {'f1': 0.8752424046541694} | {'accuracy': 0.8149568552253116} |
| 0.0126 | 312.0 | 104520 | 1.6206 | {'f1': 0.8740458015267175} | {'accuracy': 0.8101629913710451} |
| 0.0126 | 313.0 | 104855 | 1.4516 | {'f1': 0.8765352294764059} | {'accuracy': 0.8168744007670182} |
| 0.0165 | 314.0 | 105190 | 1.3816 | {'f1': 0.8804702808621816} | {'accuracy': 0.8245445829338447} |
| 0.0165 | 315.0 | 105525 | 1.3595 | {'f1': 0.8826597131681878} | {'accuracy': 0.8274209012464045} |
| 0.0165 | 316.0 | 105860 | 1.5510 | {'f1': 0.8782664117272148} | {'accuracy': 0.8168744007670182} |
| 0.0123 | 317.0 | 106195 | 1.3590 | {'f1': 0.882084690553746} | {'accuracy': 0.8264621284755513} |
| 0.0132 | 318.0 | 106530 | 1.2702 | {'f1': 0.8878627968337731} | {'accuracy': 0.8370086289549377} |
| 0.0132 | 319.0 | 106865 | 1.4241 | {'f1': 0.8840206185567011} | {'accuracy': 0.8274209012464045} |
| 0.0136 | 320.0 | 107200 | 1.3627 | {'f1': 0.8858447488584474} | {'accuracy': 0.8322147651006712} |
| 0.0161 | 321.0 | 107535 | 1.3967 | {'f1': 0.881399870382372} | {'accuracy': 0.8245445829338447} |
| 0.0161 | 322.0 | 107870 | 1.6389 | {'f1': 0.8734177215189873} | {'accuracy': 0.8082454458293384} |
| 0.0121 | 323.0 | 108205 | 1.3805 | {'f1': 0.8831168831168831} | {'accuracy': 0.8274209012464045} |
| 0.0085 | 324.0 | 108540 | 1.4762 | {'f1': 0.8831168831168831} | {'accuracy': 0.8274209012464045} |
| 0.0085 | 325.0 | 108875 | 1.4123 | {'f1': 0.8839634941329856} | {'accuracy': 0.8293384467881112} |
| 0.0111 | 326.0 | 109210 | 1.4110 | {'f1': 0.8810916179337233} | {'accuracy': 0.8245445829338447} |
| 0.009 | 327.0 | 109545 | 1.4012 | {'f1': 0.8872870249017039} | {'accuracy': 0.835091083413231} |
| 0.009 | 328.0 | 109880 | 1.3711 | {'f1': 0.8807817589576548} | {'accuracy': 0.8245445829338447} |
| 0.0114 | 329.0 | 110215 | 1.4617 | {'f1': 0.8825065274151437} | {'accuracy': 0.8274209012464045} |
| 0.0115 | 330.0 | 110550 | 1.7397 | {'f1': 0.8748435544430538} | {'accuracy': 0.8082454458293384} |
| 0.0115 | 331.0 | 110885 | 1.4449 | {'f1': 0.8847150259067358} | {'accuracy': 0.8293384467881112} |
| 0.0143 | 332.0 | 111220 | 1.4136 | {'f1': 0.88296488946684} | {'accuracy': 0.8274209012464045} |
| 0.0145 | 333.0 | 111555 | 1.2394 | {'f1': 0.8881535407015221} | {'accuracy': 0.837967401725791} |
| 0.0145 | 334.0 | 111890 | 1.3683 | {'f1': 0.882084690553746} | {'accuracy': 0.8264621284755513} |
| 0.0118 | 335.0 | 112225 | 1.5259 | {'f1': 0.8744479495268138} | {'accuracy': 0.8092042186001918} |
| 0.0124 | 336.0 | 112560 | 1.1697 | {'f1': 0.8941018766756033} | {'accuracy': 0.8485139022051774} |
| 0.0124 | 337.0 | 112895 | 1.3062 | {'f1': 0.8845654993514916} | {'accuracy': 0.8293384467881112} |
| 0.0122 | 338.0 | 113230 | 1.2246 | {'f1': 0.888888888888889} | {'accuracy': 0.8398849472674976} |
| 0.0145 | 339.0 | 113565 | 1.3319 | {'f1': 0.8839050131926122} | {'accuracy': 0.8312559923298178} |
| 0.0145 | 340.0 | 113900 | 1.3524 | {'f1': 0.8826597131681878} | {'accuracy': 0.8274209012464045} |
| 0.0116 | 341.0 | 114235 | 1.5009 | {'f1': 0.8783000643915004} | {'accuracy': 0.8187919463087249} |
| 0.0157 | 342.0 | 114570 | 1.3951 | {'f1': 0.8808757244043787} | {'accuracy': 0.822627037392138} |
| 0.0157 | 343.0 | 114905 | 1.3430 | {'f1': 0.8872870249017039} | {'accuracy': 0.835091083413231} |
| 0.0066 | 344.0 | 115240 | 1.3878 | {'f1': 0.8845401174168297} | {'accuracy': 0.8302972195589645} |
| 0.0125 | 345.0 | 115575 | 1.4196 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0125 | 346.0 | 115910 | 1.5421 | {'f1': 0.8841698841698842} | {'accuracy': 0.8274209012464045} |
| 0.0047 | 347.0 | 116245 | 1.3159 | {'f1': 0.8905206942590119} | {'accuracy': 0.8427612655800575} |
| 0.0115 | 348.0 | 116580 | 1.4889 | {'f1': 0.88296488946684} | {'accuracy': 0.8274209012464045} |
| 0.0115 | 349.0 | 116915 | 1.4914 | {'f1': 0.8841423948220065} | {'accuracy': 0.8283796740172579} |
| 0.0094 | 350.0 | 117250 | 1.4304 | {'f1': 0.8804137039431158} | {'accuracy': 0.822627037392138} |
| 0.0135 | 351.0 | 117585 | 1.5763 | {'f1': 0.8782664117272148} | {'accuracy': 0.8168744007670182} |
| 0.0135 | 352.0 | 117920 | 1.4326 | {'f1': 0.8831835686777921} | {'accuracy': 0.825503355704698} |
| 0.0159 | 353.0 | 118255 | 1.5209 | {'f1': 0.8804071246819339} | {'accuracy': 0.8197507190795782} |
| 0.0121 | 354.0 | 118590 | 1.4481 | {'f1': 0.8821243523316061} | {'accuracy': 0.825503355704698} |
| 0.0121 | 355.0 | 118925 | 1.3729 | {'f1': 0.8883013879709186} | {'accuracy': 0.837967401725791} |
| 0.0076 | 356.0 | 119260 | 1.4068 | {'f1': 0.8851174934725848} | {'accuracy': 0.8312559923298178} |
| 0.011 | 357.0 | 119595 | 1.5804 | {'f1': 0.884318766066838} | {'accuracy': 0.8274209012464045} |
| 0.011 | 358.0 | 119930 | 1.4468 | {'f1': 0.8861418347430059} | {'accuracy': 0.8322147651006712} |
| 0.0104 | 359.0 | 120265 | 1.5484 | {'f1': 0.8794871794871796} | {'accuracy': 0.8197507190795782} |
| 0.0098 | 360.0 | 120600 | 1.5450 | {'f1': 0.8806161745827985} | {'accuracy': 0.8216682646212847} |
| 0.0098 | 361.0 | 120935 | 1.4119 | {'f1': 0.8855461085676913} | {'accuracy': 0.8322147651006712} |
| 0.01 | 362.0 | 121270 | 1.5413 | {'f1': 0.8811817597944765} | {'accuracy': 0.822627037392138} |
| 0.0057 | 363.0 | 121605 | 1.4996 | {'f1': 0.8835078534031414} | {'accuracy': 0.8293384467881112} |
| 0.0057 | 364.0 | 121940 | 1.5304 | {'f1': 0.8804137039431158} | {'accuracy': 0.822627037392138} |
| 0.0083 | 365.0 | 122275 | 1.4527 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0128 | 366.0 | 122610 | 1.3604 | {'f1': 0.8884488448844885} | {'accuracy': 0.837967401725791} |
| 0.0128 | 367.0 | 122945 | 1.4389 | {'f1': 0.8837209302325582} | {'accuracy': 0.8274209012464045} |
| 0.0108 | 368.0 | 123280 | 1.5141 | {'f1': 0.8828828828828829} | {'accuracy': 0.825503355704698} |
| 0.0137 | 369.0 | 123615 | 1.3898 | {'f1': 0.8823911630929175} | {'accuracy': 0.8264621284755513} |
| 0.0137 | 370.0 | 123950 | 1.3869 | {'f1': 0.8839634941329856} | {'accuracy': 0.8293384467881112} |
| 0.0075 | 371.0 | 124285 | 1.3627 | {'f1': 0.8856767411300921} | {'accuracy': 0.8331735378715245} |
| 0.0075 | 372.0 | 124620 | 1.4736 | {'f1': 0.8809368900455433} | {'accuracy': 0.8245445829338447} |
| 0.0075 | 373.0 | 124955 | 1.4527 | {'f1': 0.8838120104438641} | {'accuracy': 0.8293384467881112} |
| 0.0085 | 374.0 | 125290 | 1.5817 | {'f1': 0.8835705045278138} | {'accuracy': 0.8274209012464045} |
| 0.0079 | 375.0 | 125625 | 1.6168 | {'f1': 0.8815958815958816} | {'accuracy': 0.8235858101629914} |
| 0.0079 | 376.0 | 125960 | 1.5068 | {'f1': 0.8819714656290532} | {'accuracy': 0.825503355704698} |
| 0.0083 | 377.0 | 126295 | 1.5055 | {'f1': 0.8832684824902725} | {'accuracy': 0.8274209012464045} |
| 0.0115 | 378.0 | 126630 | 1.5258 | {'f1': 0.8828828828828829} | {'accuracy': 0.825503355704698} |
| 0.0115 | 379.0 | 126965 | 1.6030 | {'f1': 0.8752399232245682} | {'accuracy': 0.8130393096836049} |
| 0.007 | 380.0 | 127300 | 1.5607 | {'f1': 0.8759590792838876} | {'accuracy': 0.8139980824544583} |
| 0.0115 | 381.0 | 127635 | 1.4875 | {'f1': 0.8828478964401294} | {'accuracy': 0.8264621284755513} |
| 0.0115 | 382.0 | 127970 | 1.4510 | {'f1': 0.8798449612403101} | {'accuracy': 0.8216682646212847} |
| 0.0086 | 383.0 | 128305 | 1.3703 | {'f1': 0.8815789473684211} | {'accuracy': 0.8274209012464045} |
| 0.0071 | 384.0 | 128640 | 1.4905 | {'f1': 0.8827319587628866} | {'accuracy': 0.825503355704698} |
| 0.0071 | 385.0 | 128975 | 1.5289 | {'f1': 0.8811817597944765} | {'accuracy': 0.822627037392138} |
| 0.0066 | 386.0 | 129310 | 1.5711 | {'f1': 0.8780487804878049} | {'accuracy': 0.8178331735378715} |
| 0.0105 | 387.0 | 129645 | 1.6893 | {'f1': 0.8701134930643127} | {'accuracy': 0.8024928092042186} |
| 0.0105 | 388.0 | 129980 | 1.5360 | {'f1': 0.8800000000000001} | {'accuracy': 0.8216682646212847} |
| 0.0121 | 389.0 | 130315 | 1.4854 | {'f1': 0.8792769528728211} | {'accuracy': 0.8207094918504314} |
| 0.0095 | 390.0 | 130650 | 1.4741 | {'f1': 0.881664499349805} | {'accuracy': 0.825503355704698} |
| 0.0095 | 391.0 | 130985 | 1.5628 | {'f1': 0.8763613068545804} | {'accuracy': 0.8149568552253116} |
| 0.009 | 392.0 | 131320 | 1.5560 | {'f1': 0.8811817597944765} | {'accuracy': 0.822627037392138} |
| 0.0079 | 393.0 | 131655 | 1.4950 | {'f1': 0.8825065274151437} | {'accuracy': 0.8274209012464045} |
| 0.0079 | 394.0 | 131990 | 1.5270 | {'f1': 0.8784925276153347} | {'accuracy': 0.8207094918504314} |
| 0.0077 | 395.0 | 132325 | 1.5941 | {'f1': 0.8751608751608753} | {'accuracy': 0.8139980824544583} |
| 0.0097 | 396.0 | 132660 | 1.4690 | {'f1': 0.8795811518324608} | {'accuracy': 0.8235858101629914} |
| 0.0097 | 397.0 | 132995 | 1.4459 | {'f1': 0.8812664907651715} | {'accuracy': 0.8274209012464045} |
| 0.0051 | 398.0 | 133330 | 1.4213 | {'f1': 0.8822751322751322} | {'accuracy': 0.8293384467881112} |
| 0.0079 | 399.0 | 133665 | 1.4610 | {'f1': 0.8840291583830351} | {'accuracy': 0.8322147651006712} |
| 0.0066 | 400.0 | 134000 | 1.5880 | {'f1': 0.8771704180064309} | {'accuracy': 0.8168744007670182} |
| 0.0066 | 401.0 | 134335 | 1.7606 | {'f1': 0.874442319949012} | {'accuracy': 0.8111217641418984} |
| 0.0139 | 402.0 | 134670 | 1.5978 | {'f1': 0.8766946417043254} | {'accuracy': 0.8168744007670182} |
| 0.0078 | 403.0 | 135005 | 1.4467 | {'f1': 0.8831683168316831} | {'accuracy': 0.8302972195589645} |
| 0.0078 | 404.0 | 135340 | 1.4775 | {'f1': 0.8838752488387525} | {'accuracy': 0.8322147651006712} |
| 0.0069 | 405.0 | 135675 | 1.6677 | {'f1': 0.8795878943979395} | {'accuracy': 0.8207094918504314} |
| 0.0074 | 406.0 | 136010 | 1.6645 | {'f1': 0.8765352294764059} | {'accuracy': 0.8168744007670182} |
| 0.0074 | 407.0 | 136345 | 1.7429 | {'f1': 0.8766946417043254} | {'accuracy': 0.8168744007670182} |
| 0.0103 | 408.0 | 136680 | 1.5847 | {'f1': 0.8773523685918235} | {'accuracy': 0.8187919463087249} |
| 0.0102 | 409.0 | 137015 | 1.5852 | {'f1': 0.8768536428110896} | {'accuracy': 0.8168744007670182} |
| 0.0102 | 410.0 | 137350 | 1.6428 | {'f1': 0.8786127167630058} | {'accuracy': 0.8187919463087249} |
| 0.0061 | 411.0 | 137685 | 1.4875 | {'f1': 0.8840579710144927} | {'accuracy': 0.8312559923298178} |
| 0.0104 | 412.0 | 138020 | 1.5198 | {'f1': 0.8819308545335943} | {'accuracy': 0.8264621284755513} |
| 0.0104 | 413.0 | 138355 | 1.6076 | {'f1': 0.8770122343850613} | {'accuracy': 0.8168744007670182} |
| 0.0086 | 414.0 | 138690 | 1.5185 | {'f1': 0.8823911630929175} | {'accuracy': 0.8264621284755513} |
| 0.0093 | 415.0 | 139025 | 1.4091 | {'f1': 0.8880053015241882} | {'accuracy': 0.837967401725791} |
| 0.0093 | 416.0 | 139360 | 1.5178 | {'f1': 0.8819308545335943} | {'accuracy': 0.8264621284755513} |
| 0.0083 | 417.0 | 139695 | 1.5196 | {'f1': 0.8809368900455433} | {'accuracy': 0.8245445829338447} |
| 0.0058 | 418.0 | 140030 | 1.5603 | {'f1': 0.8818181818181818} | {'accuracy': 0.825503355704698} |
| 0.0058 | 419.0 | 140365 | 1.4341 | {'f1': 0.8874172185430463} | {'accuracy': 0.8370086289549377} |
| 0.0092 | 420.0 | 140700 | 1.4512 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0156 | 421.0 | 141035 | 1.4767 | {'f1': 0.8832684824902725} | {'accuracy': 0.8274209012464045} |
| 0.0156 | 422.0 | 141370 | 1.4180 | {'f1': 0.8833551769331585} | {'accuracy': 0.8293384467881112} |
| 0.0084 | 423.0 | 141705 | 1.4753 | {'f1': 0.8830829523187459} | {'accuracy': 0.8283796740172579} |
| 0.01 | 424.0 | 142040 | 1.4568 | {'f1': 0.8851395197923426} | {'accuracy': 0.8302972195589645} |
| 0.01 | 425.0 | 142375 | 1.4286 | {'f1': 0.8830829523187459} | {'accuracy': 0.8283796740172579} |
| 0.006 | 426.0 | 142710 | 1.4707 | {'f1': 0.8797385620915033} | {'accuracy': 0.8235858101629914} |
| 0.0057 | 427.0 | 143045 | 1.6428 | {'f1': 0.8773946360153257} | {'accuracy': 0.8159156279961649} |
| 0.0057 | 428.0 | 143380 | 1.4762 | {'f1': 0.8832354859752121} | {'accuracy': 0.8283796740172579} |
| 0.008 | 429.0 | 143715 | 1.5212 | {'f1': 0.8803641092327698} | {'accuracy': 0.8235858101629914} |
| 0.0052 | 430.0 | 144050 | 1.4851 | {'f1': 0.8818897637795275} | {'accuracy': 0.8274209012464045} |
| 0.0052 | 431.0 | 144385 | 1.6304 | {'f1': 0.8774193548387096} | {'accuracy': 0.8178331735378715} |
| 0.0089 | 432.0 | 144720 | 1.4639 | {'f1': 0.8836291913214991} | {'accuracy': 0.8302972195589645} |
| 0.0069 | 433.0 | 145055 | 1.6646 | {'f1': 0.8756410256410255} | {'accuracy': 0.8139980824544583} |
| 0.0069 | 434.0 | 145390 | 1.5078 | {'f1': 0.8816219751471549} | {'accuracy': 0.8264621284755513} |
| 0.0058 | 435.0 | 145725 | 1.6814 | {'f1': 0.874442319949012} | {'accuracy': 0.8111217641418984} |
| 0.0071 | 436.0 | 146060 | 1.5512 | {'f1': 0.8820445609436436} | {'accuracy': 0.8274209012464045} |
| 0.0071 | 437.0 | 146395 | 1.4773 | {'f1': 0.8827766863130322} | {'accuracy': 0.8283796740172579} |
| 0.0111 | 438.0 | 146730 | 1.5255 | {'f1': 0.8797920727745288} | {'accuracy': 0.822627037392138} |
| 0.0071 | 439.0 | 147065 | 1.5505 | {'f1': 0.8809831824062095} | {'accuracy': 0.8235858101629914} |
| 0.0071 | 440.0 | 147400 | 1.4674 | {'f1': 0.8830486202365309} | {'accuracy': 0.8293384467881112} |
| 0.0075 | 441.0 | 147735 | 1.5285 | {'f1': 0.8790058862001309} | {'accuracy': 0.822627037392138} |
| 0.0096 | 442.0 | 148070 | 1.5072 | {'f1': 0.8832020997375328} | {'accuracy': 0.8293384467881112} |
| 0.0096 | 443.0 | 148405 | 1.4909 | {'f1': 0.8820445609436436} | {'accuracy': 0.8274209012464045} |
| 0.0085 | 444.0 | 148740 | 1.6459 | {'f1': 0.8762755102040816} | {'accuracy': 0.8139980824544583} |
| 0.0127 | 445.0 | 149075 | 1.4276 | {'f1': 0.8818897637795275} | {'accuracy': 0.8274209012464045} |
| 0.0127 | 446.0 | 149410 | 1.5216 | {'f1': 0.8762151652624757} | {'accuracy': 0.8168744007670182} |
| 0.0072 | 447.0 | 149745 | 1.5487 | {'f1': 0.8791208791208791} | {'accuracy': 0.8207094918504314} |
| 0.0076 | 448.0 | 150080 | 1.5529 | {'f1': 0.8808290155440415} | {'accuracy': 0.8235858101629914} |
| 0.0076 | 449.0 | 150415 | 1.5916 | {'f1': 0.8782051282051283} | {'accuracy': 0.8178331735378715} |
| 0.007 | 450.0 | 150750 | 1.6348 | {'f1': 0.8778920308483289} | {'accuracy': 0.8178331735378715} |
| 0.0072 | 451.0 | 151085 | 1.4377 | {'f1': 0.8838120104438641} | {'accuracy': 0.8293384467881112} |
| 0.0072 | 452.0 | 151420 | 1.4952 | {'f1': 0.8802588996763754} | {'accuracy': 0.822627037392138} |
| 0.0077 | 453.0 | 151755 | 1.4855 | {'f1': 0.8839922229423202} | {'accuracy': 0.8283796740172579} |
| 0.0051 | 454.0 | 152090 | 1.5523 | {'f1': 0.8820116054158607} | {'accuracy': 0.8245445829338447} |
| 0.0051 | 455.0 | 152425 | 1.4575 | {'f1': 0.8835078534031414} | {'accuracy': 0.8293384467881112} |
| 0.0069 | 456.0 | 152760 | 1.5480 | {'f1': 0.8803088803088803} | {'accuracy': 0.8216682646212847} |
| 0.0082 | 457.0 | 153095 | 1.4397 | {'f1': 0.8883048620236531} | {'accuracy': 0.8370086289549377} |
| 0.0082 | 458.0 | 153430 | 1.5694 | {'f1': 0.880722114764668} | {'accuracy': 0.822627037392138} |
| 0.0096 | 459.0 | 153765 | 1.5194 | {'f1': 0.8790058862001309} | {'accuracy': 0.822627037392138} |
| 0.0097 | 460.0 | 154100 | 1.4786 | {'f1': 0.88296488946684} | {'accuracy': 0.8274209012464045} |
| 0.0097 | 461.0 | 154435 | 1.4634 | {'f1': 0.8798955613577023} | {'accuracy': 0.8235858101629914} |
| 0.009 | 462.0 | 154770 | 1.7132 | {'f1': 0.8779253636938646} | {'accuracy': 0.8149568552253116} |
| 0.0095 | 463.0 | 155105 | 1.5436 | {'f1': 0.8820512820512821} | {'accuracy': 0.8235858101629914} |
| 0.0095 | 464.0 | 155440 | 1.4633 | {'f1': 0.8842105263157894} | {'accuracy': 0.8312559923298178} |
| 0.0051 | 465.0 | 155775 | 1.5750 | {'f1': 0.8837209302325582} | {'accuracy': 0.8274209012464045} |
| 0.0081 | 466.0 | 156110 | 1.5039 | {'f1': 0.8845144356955381} | {'accuracy': 0.8312559923298178} |
| 0.0081 | 467.0 | 156445 | 1.5045 | {'f1': 0.8828125} | {'accuracy': 0.8274209012464045} |
| 0.0077 | 468.0 | 156780 | 1.4375 | {'f1': 0.8793671720500988} | {'accuracy': 0.8245445829338447} |
| 0.0057 | 469.0 | 157115 | 1.6179 | {'f1': 0.8784565916398713} | {'accuracy': 0.8187919463087249} |
| 0.0057 | 470.0 | 157450 | 1.6056 | {'f1': 0.881664499349805} | {'accuracy': 0.825503355704698} |
| 0.0054 | 471.0 | 157785 | 1.6304 | {'f1': 0.8796356538711776} | {'accuracy': 0.822627037392138} |
| 0.0084 | 472.0 | 158120 | 1.5060 | {'f1': 0.8789297658862876} | {'accuracy': 0.8264621284755513} |
| 0.0084 | 473.0 | 158455 | 1.5820 | {'f1': 0.881399870382372} | {'accuracy': 0.8245445829338447} |
| 0.0064 | 474.0 | 158790 | 1.5291 | {'f1': 0.8803641092327698} | {'accuracy': 0.8235858101629914} |
| 0.0091 | 475.0 | 159125 | 1.5703 | {'f1': 0.8784925276153347} | {'accuracy': 0.8207094918504314} |
| 0.0091 | 476.0 | 159460 | 1.4637 | {'f1': 0.882392026578073} | {'accuracy': 0.8302972195589645} |
| 0.0039 | 477.0 | 159795 | 1.5989 | {'f1': 0.8803641092327698} | {'accuracy': 0.8235858101629914} |
| 0.0112 | 478.0 | 160130 | 1.5197 | {'f1': 0.8809831824062095} | {'accuracy': 0.8235858101629914} |
| 0.0112 | 479.0 | 160465 | 1.4421 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0092 | 480.0 | 160800 | 1.4686 | {'f1': 0.8852459016393444} | {'accuracy': 0.8322147651006712} |
| 0.0088 | 481.0 | 161135 | 1.4668 | {'f1': 0.8845401174168297} | {'accuracy': 0.8302972195589645} |
| 0.0088 | 482.0 | 161470 | 1.4334 | {'f1': 0.8832020997375328} | {'accuracy': 0.8293384467881112} |
| 0.0096 | 483.0 | 161805 | 1.5198 | {'f1': 0.8786502271252432} | {'accuracy': 0.8207094918504314} |
| 0.0047 | 484.0 | 162140 | 1.4443 | {'f1': 0.8850726552179656} | {'accuracy': 0.8331735378715245} |
| 0.0047 | 485.0 | 162475 | 1.5450 | {'f1': 0.8819714656290532} | {'accuracy': 0.825503355704698} |
| 0.0069 | 486.0 | 162810 | 1.4660 | {'f1': 0.8791639451338994} | {'accuracy': 0.822627037392138} |
| 0.0067 | 487.0 | 163145 | 1.5695 | {'f1': 0.8842921784098255} | {'accuracy': 0.8283796740172579} |
| 0.0067 | 488.0 | 163480 | 1.5924 | {'f1': 0.8825806451612904} | {'accuracy': 0.825503355704698} |
| 0.0055 | 489.0 | 163815 | 1.5764 | {'f1': 0.8822768434670116} | {'accuracy': 0.825503355704698} |
| 0.009 | 490.0 | 164150 | 1.5042 | {'f1': 0.881311475409836} | {'accuracy': 0.8264621284755513} |
| 0.009 | 491.0 | 164485 | 1.4637 | {'f1': 0.8821593153390389} | {'accuracy': 0.8283796740172579} |
| 0.0086 | 492.0 | 164820 | 1.5421 | {'f1': 0.8810916179337233} | {'accuracy': 0.8245445829338447} |
| 0.0066 | 493.0 | 165155 | 1.4550 | {'f1': 0.8821192052980134} | {'accuracy': 0.8293384467881112} |
| 0.0066 | 494.0 | 165490 | 1.5674 | {'f1': 0.875902823374918} | {'accuracy': 0.8187919463087249} |
| 0.0049 | 495.0 | 165825 | 1.6806 | {'f1': 0.8776699029126214} | {'accuracy': 0.8187919463087249} |
| 0.0029 | 496.0 | 166160 | 1.7344 | {'f1': 0.8764478764478765} | {'accuracy': 0.8159156279961649} |
| 0.0029 | 497.0 | 166495 | 1.6178 | {'f1': 0.8807339449541284} | {'accuracy': 0.825503355704698} |
| 0.0064 | 498.0 | 166830 | 1.7900 | {'f1': 0.8757921419518377} | {'accuracy': 0.8120805369127517} |
| 0.008 | 499.0 | 167165 | 1.7277 | {'f1': 0.8774855676715844} | {'accuracy': 0.8168744007670182} |
| 0.0059 | 500.0 | 167500 | 1.6116 | {'f1': 0.8785900783289817} | {'accuracy': 0.8216682646212847} |
| 0.0059 | 501.0 | 167835 | 1.6649 | {'f1': 0.8775773195876289} | {'accuracy': 0.8178331735378715} |
| 0.0075 | 502.0 | 168170 | 1.5932 | {'f1': 0.8761408083441983} | {'accuracy': 0.8178331735378715} |
| 0.0086 | 503.0 | 168505 | 1.5948 | {'f1': 0.8817766165904638} | {'accuracy': 0.8264621284755513} |
| 0.0086 | 504.0 | 168840 | 1.5424 | {'f1': 0.876984126984127} | {'accuracy': 0.8216682646212847} |
| 0.0063 | 505.0 | 169175 | 1.5306 | {'f1': 0.8818897637795275} | {'accuracy': 0.8274209012464045} |
| 0.0088 | 506.0 | 169510 | 1.5694 | {'f1': 0.881399870382372} | {'accuracy': 0.8245445829338447} |
| 0.0088 | 507.0 | 169845 | 1.5427 | {'f1': 0.8835078534031414} | {'accuracy': 0.8293384467881112} |
| 0.0035 | 508.0 | 170180 | 1.6553 | {'f1': 0.8796895213454075} | {'accuracy': 0.8216682646212847} |
| 0.007 | 509.0 | 170515 | 1.5628 | {'f1': 0.8791064388961892} | {'accuracy': 0.8235858101629914} |
| 0.007 | 510.0 | 170850 | 1.5664 | {'f1': 0.8793215916503588} | {'accuracy': 0.822627037392138} |
| 0.0038 | 511.0 | 171185 | 1.5796 | {'f1': 0.8790637191157347} | {'accuracy': 0.8216682646212847} |
| 0.006 | 512.0 | 171520 | 1.5388 | {'f1': 0.8765676567656766} | {'accuracy': 0.8207094918504314} |
| 0.006 | 513.0 | 171855 | 1.6644 | {'f1': 0.8790218790218791} | {'accuracy': 0.8197507190795782} |
| 0.0122 | 514.0 | 172190 | 1.5761 | {'f1': 0.8753280839895013} | {'accuracy': 0.8178331735378715} |
| 0.0076 | 515.0 | 172525 | 1.6813 | {'f1': 0.8817480719794344} | {'accuracy': 0.8235858101629914} |
| 0.0076 | 516.0 | 172860 | 1.5030 | {'f1': 0.8858085808580859} | {'accuracy': 0.8341323106423778} |
| 0.0059 | 517.0 | 173195 | 1.6525 | {'f1': 0.8787096774193548} | {'accuracy': 0.8197507190795782} |
| 0.0071 | 518.0 | 173530 | 1.6402 | {'f1': 0.8810916179337233} | {'accuracy': 0.8245445829338447} |
| 0.0071 | 519.0 | 173865 | 1.5771 | {'f1': 0.8836601307189542} | {'accuracy': 0.8293384467881112} |
| 0.0085 | 520.0 | 174200 | 1.6690 | {'f1': 0.8818181818181818} | {'accuracy': 0.825503355704698} |
| 0.0083 | 521.0 | 174535 | 1.5547 | {'f1': 0.885396201702685} | {'accuracy': 0.8322147651006712} |
| 0.0083 | 522.0 | 174870 | 1.4857 | {'f1': 0.8789473684210526} | {'accuracy': 0.8235858101629914} |
| 0.0124 | 523.0 | 175205 | 1.4382 | {'f1': 0.8816219751471549} | {'accuracy': 0.8264621284755513} |
| 0.0076 | 524.0 | 175540 | 1.6573 | {'f1': 0.8775510204081634} | {'accuracy': 0.8159156279961649} |
| 0.0076 | 525.0 | 175875 | 1.5538 | {'f1': 0.8786502271252432} | {'accuracy': 0.8207094918504314} |
| 0.0071 | 526.0 | 176210 | 1.6442 | {'f1': 0.876117496807152} | {'accuracy': 0.8139980824544583} |
| 0.0067 | 527.0 | 176545 | 1.6500 | {'f1': 0.880722114764668} | {'accuracy': 0.822627037392138} |
| 0.0067 | 528.0 | 176880 | 1.4884 | {'f1': 0.8823142669296515} | {'accuracy': 0.8283796740172579} |
| 0.007 | 529.0 | 177215 | 1.4984 | {'f1': 0.8823142669296515} | {'accuracy': 0.8283796740172579} |
| 0.0062 | 530.0 | 177550 | 1.6704 | {'f1': 0.8794326241134752} | {'accuracy': 0.8207094918504314} |
| 0.0062 | 531.0 | 177885 | 1.6022 | {'f1': 0.8764629388816645} | {'accuracy': 0.8178331735378715} |
| 0.0053 | 532.0 | 178220 | 1.5841 | {'f1': 0.8788075178224238} | {'accuracy': 0.8207094918504314} |
| 0.0071 | 533.0 | 178555 | 1.5986 | {'f1': 0.8795336787564768} | {'accuracy': 0.8216682646212847} |
| 0.0071 | 534.0 | 178890 | 1.5788 | {'f1': 0.8770331815224464} | {'accuracy': 0.8187919463087249} |
| 0.0082 | 535.0 | 179225 | 1.5415 | {'f1': 0.8794233289646134} | {'accuracy': 0.8235858101629914} |
| 0.0067 | 536.0 | 179560 | 1.6323 | {'f1': 0.8800000000000001} | {'accuracy': 0.8216682646212847} |
| 0.0067 | 537.0 | 179895 | 1.5763 | {'f1': 0.87890625} | {'accuracy': 0.8216682646212847} |
| 0.007 | 538.0 | 180230 | 1.6408 | {'f1': 0.8778280542986426} | {'accuracy': 0.8187919463087249} |
| 0.0062 | 539.0 | 180565 | 1.5163 | {'f1': 0.8841826604897418} | {'accuracy': 0.8322147651006712} |
| 0.0062 | 540.0 | 180900 | 1.5985 | {'f1': 0.8783958602846055} | {'accuracy': 0.8197507190795782} |
| 0.0069 | 541.0 | 181235 | 1.5664 | {'f1': 0.8802588996763754} | {'accuracy': 0.822627037392138} |
| 0.0049 | 542.0 | 181570 | 1.7383 | {'f1': 0.875875238701464} | {'accuracy': 0.8130393096836049} |
| 0.0049 | 543.0 | 181905 | 1.5475 | {'f1': 0.8806262230919765} | {'accuracy': 0.8245445829338447} |
| 0.0051 | 544.0 | 182240 | 1.6891 | {'f1': 0.8818998716302953} | {'accuracy': 0.8235858101629914} |
| 0.0041 | 545.0 | 182575 | 1.5626 | {'f1': 0.8826597131681878} | {'accuracy': 0.8274209012464045} |
| 0.0041 | 546.0 | 182910 | 1.6650 | {'f1': 0.8801546391752577} | {'accuracy': 0.8216682646212847} |
| 0.0046 | 547.0 | 183245 | 1.6727 | {'f1': 0.880722114764668} | {'accuracy': 0.822627037392138} |
| 0.0067 | 548.0 | 183580 | 1.5603 | {'f1': 0.8784313725490197} | {'accuracy': 0.8216682646212847} |
| 0.0067 | 549.0 | 183915 | 1.6787 | {'f1': 0.881859264041317} | {'accuracy': 0.8245445829338447} |
| 0.0035 | 550.0 | 184250 | 1.5408 | {'f1': 0.8822381262199089} | {'accuracy': 0.8264621284755513} |
| 0.0097 | 551.0 | 184585 | 1.5098 | {'f1': 0.8809368900455433} | {'accuracy': 0.8245445829338447} |
| 0.0097 | 552.0 | 184920 | 1.5416 | {'f1': 0.8806262230919765} | {'accuracy': 0.8245445829338447} |
| 0.0064 | 553.0 | 185255 | 1.7427 | {'f1': 0.8736507936507937} | {'accuracy': 0.8092042186001918} |
| 0.0083 | 554.0 | 185590 | 1.4818 | {'f1': 0.8797385620915033} | {'accuracy': 0.8235858101629914} |
| 0.0083 | 555.0 | 185925 | 1.5257 | {'f1': 0.8821243523316061} | {'accuracy': 0.825503355704698} |
| 0.0067 | 556.0 | 186260 | 1.4460 | {'f1': 0.8837820091923834} | {'accuracy': 0.8302972195589645} |
| 0.0083 | 557.0 | 186595 | 1.5127 | {'f1': 0.8786885245901639} | {'accuracy': 0.822627037392138} |
| 0.0083 | 558.0 | 186930 | 1.6163 | {'f1': 0.8802588996763754} | {'accuracy': 0.822627037392138} |
| 0.0047 | 559.0 | 187265 | 1.5008 | {'f1': 0.8865435356200528} | {'accuracy': 0.835091083413231} |
| 0.0053 | 560.0 | 187600 | 1.5135 | {'f1': 0.8858085808580859} | {'accuracy': 0.8341323106423778} |
| 0.0053 | 561.0 | 187935 | 1.5165 | {'f1': 0.8821989528795812} | {'accuracy': 0.8274209012464045} |
| 0.0087 | 562.0 | 188270 | 1.5569 | {'f1': 0.8821243523316061} | {'accuracy': 0.825503355704698} |
| 0.007 | 563.0 | 188605 | 1.4532 | {'f1': 0.8821593153390389} | {'accuracy': 0.8283796740172579} |
| 0.007 | 564.0 | 188940 | 1.4890 | {'f1': 0.8850726552179656} | {'accuracy': 0.8331735378715245} |
| 0.004 | 565.0 | 189275 | 1.5723 | {'f1': 0.8861418347430059} | {'accuracy': 0.8322147651006712} |
| 0.0075 | 566.0 | 189610 | 1.5125 | {'f1': 0.8858267716535433} | {'accuracy': 0.8331735378715245} |
| 0.0075 | 567.0 | 189945 | 1.6625 | {'f1': 0.8802049967969251} | {'accuracy': 0.8207094918504314} |
| 0.0049 | 568.0 | 190280 | 1.5940 | {'f1': 0.883419689119171} | {'accuracy': 0.8274209012464045} |
| 0.006 | 569.0 | 190615 | 1.6085 | {'f1': 0.8822768434670116} | {'accuracy': 0.825503355704698} |
| 0.006 | 570.0 | 190950 | 1.4933 | {'f1': 0.8833551769331585} | {'accuracy': 0.8293384467881112} |
| 0.0047 | 571.0 | 191285 | 1.5427 | {'f1': 0.8878688524590164} | {'accuracy': 0.8360498561840843} |
| 0.0066 | 572.0 | 191620 | 1.6111 | {'f1': 0.8827319587628866} | {'accuracy': 0.825503355704698} |
| 0.0066 | 573.0 | 191955 | 1.4664 | {'f1': 0.8866930171277998} | {'accuracy': 0.835091083413231} |
| 0.0069 | 574.0 | 192290 | 1.5405 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0085 | 575.0 | 192625 | 1.5603 | {'f1': 0.881201044386423} | {'accuracy': 0.825503355704698} |
| 0.0085 | 576.0 | 192960 | 1.6009 | {'f1': 0.8825438027255029} | {'accuracy': 0.8264621284755513} |
| 0.0044 | 577.0 | 193295 | 1.5765 | {'f1': 0.8825065274151437} | {'accuracy': 0.8274209012464045} |
| 0.006 | 578.0 | 193630 | 1.6543 | {'f1': 0.8817065287653523} | {'accuracy': 0.8245445829338447} |
| 0.006 | 579.0 | 193965 | 1.6021 | {'f1': 0.8802588996763754} | {'accuracy': 0.822627037392138} |
| 0.0066 | 580.0 | 194300 | 1.6881 | {'f1': 0.8773946360153257} | {'accuracy': 0.8159156279961649} |
| 0.0076 | 581.0 | 194635 | 1.4859 | {'f1': 0.886829913964262} | {'accuracy': 0.8360498561840843} |
| 0.0076 | 582.0 | 194970 | 1.4807 | {'f1': 0.8836291913214991} | {'accuracy': 0.8302972195589645} |
| 0.0081 | 583.0 | 195305 | 1.5842 | {'f1': 0.8794326241134752} | {'accuracy': 0.8207094918504314} |
| 0.0088 | 584.0 | 195640 | 1.5026 | {'f1': 0.8851174934725848} | {'accuracy': 0.8312559923298178} |
| 0.0088 | 585.0 | 195975 | 1.5329 | {'f1': 0.8872964169381107} | {'accuracy': 0.8341323106423778} |
| 0.0022 | 586.0 | 196310 | 1.7231 | {'f1': 0.8787301587301588} | {'accuracy': 0.8168744007670182} |
| 0.0097 | 587.0 | 196645 | 1.5368 | {'f1': 0.8837209302325582} | {'accuracy': 0.8274209012464045} |
| 0.0097 | 588.0 | 196980 | 1.6242 | {'f1': 0.8833010960670536} | {'accuracy': 0.8264621284755513} |
| 0.0028 | 589.0 | 197315 | 1.5300 | {'f1': 0.8875816993464052} | {'accuracy': 0.835091083413231} |
| 0.0033 | 590.0 | 197650 | 1.6450 | {'f1': 0.8793324775353017} | {'accuracy': 0.8197507190795782} |
| 0.0033 | 591.0 | 197985 | 1.5715 | {'f1': 0.8793324775353017} | {'accuracy': 0.8197507190795782} |
| 0.0089 | 592.0 | 198320 | 1.5802 | {'f1': 0.8809368900455433} | {'accuracy': 0.8245445829338447} |
| 0.0024 | 593.0 | 198655 | 1.5816 | {'f1': 0.8771929824561404} | {'accuracy': 0.8187919463087249} |
| 0.0024 | 594.0 | 198990 | 1.5100 | {'f1': 0.8836291913214991} | {'accuracy': 0.8302972195589645} |
| 0.0066 | 595.0 | 199325 | 1.5332 | {'f1': 0.8897637795275591} | {'accuracy': 0.8389261744966443} |
| 0.0025 | 596.0 | 199660 | 1.5607 | {'f1': 0.8856767411300921} | {'accuracy': 0.8331735378715245} |
| 0.0025 | 597.0 | 199995 | 1.6592 | {'f1': 0.8820116054158607} | {'accuracy': 0.8245445829338447} |
| 0.0054 | 598.0 | 200330 | 1.4389 | {'f1': 0.8843626806833115} | {'accuracy': 0.8312559923298178} |
| 0.0085 | 599.0 | 200665 | 1.5571 | {'f1': 0.8821243523316061} | {'accuracy': 0.825503355704698} |
| 0.0056 | 600.0 | 201000 | 1.5473 | {'f1': 0.88671875} | {'accuracy': 0.8331735378715245} |
| 0.0056 | 601.0 | 201335 | 1.5260 | {'f1': 0.8840579710144927} | {'accuracy': 0.8312559923298178} |
| 0.0028 | 602.0 | 201670 | 1.6108 | {'f1': 0.8852672750977836} | {'accuracy': 0.8312559923298178} |
| 0.0043 | 603.0 | 202005 | 1.6212 | {'f1': 0.8830829523187459} | {'accuracy': 0.8283796740172579} |
| 0.0043 | 604.0 | 202340 | 1.5834 | {'f1': 0.8829300196206671} | {'accuracy': 0.8283796740172579} |
| 0.0061 | 605.0 | 202675 | 1.6756 | {'f1': 0.8818181818181818} | {'accuracy': 0.825503355704698} |
| 0.003 | 606.0 | 203010 | 1.7325 | {'f1': 0.8796895213454075} | {'accuracy': 0.8216682646212847} |
| 0.003 | 607.0 | 203345 | 1.5822 | {'f1': 0.8846153846153847} | {'accuracy': 0.8331735378715245} |
| 0.0041 | 608.0 | 203680 | 1.5585 | {'f1': 0.8856382978723404} | {'accuracy': 0.835091083413231} |
| 0.0047 | 609.0 | 204015 | 1.7206 | {'f1': 0.8809831824062095} | {'accuracy': 0.8235858101629914} |
| 0.0047 | 610.0 | 204350 | 1.6442 | {'f1': 0.8809368900455433} | {'accuracy': 0.8245445829338447} |
| 0.0041 | 611.0 | 204685 | 1.6960 | {'f1': 0.88296488946684} | {'accuracy': 0.8274209012464045} |
| 0.0052 | 612.0 | 205020 | 1.6414 | {'f1': 0.8836291913214991} | {'accuracy': 0.8302972195589645} |
| 0.0052 | 613.0 | 205355 | 1.5995 | {'f1': 0.8820445609436436} | {'accuracy': 0.8274209012464045} |
| 0.0079 | 614.0 | 205690 | 1.5441 | {'f1': 0.8846657929226737} | {'accuracy': 0.8312559923298178} |
| 0.0067 | 615.0 | 206025 | 1.5942 | {'f1': 0.8852672750977836} | {'accuracy': 0.8312559923298178} |
| 0.0067 | 616.0 | 206360 | 1.7989 | {'f1': 0.8775510204081634} | {'accuracy': 0.8159156279961649} |
| 0.0037 | 617.0 | 206695 | 1.5977 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0084 | 618.0 | 207030 | 1.5557 | {'f1': 0.8829993535875889} | {'accuracy': 0.8264621284755513} |
| 0.0084 | 619.0 | 207365 | 1.4929 | {'f1': 0.8815789473684211} | {'accuracy': 0.8274209012464045} |
| 0.0079 | 620.0 | 207700 | 1.6609 | {'f1': 0.8778920308483289} | {'accuracy': 0.8178331735378715} |
| 0.006 | 621.0 | 208035 | 1.6334 | {'f1': 0.880103694102398} | {'accuracy': 0.822627037392138} |
| 0.006 | 622.0 | 208370 | 1.5635 | {'f1': 0.8832354859752121} | {'accuracy': 0.8283796740172579} |
| 0.0053 | 623.0 | 208705 | 1.5781 | {'f1': 0.8842380640941793} | {'accuracy': 0.8302972195589645} |
| 0.004 | 624.0 | 209040 | 1.6173 | {'f1': 0.8819308545335943} | {'accuracy': 0.8264621284755513} |
| 0.004 | 625.0 | 209375 | 1.4919 | {'f1': 0.8837209302325582} | {'accuracy': 0.8322147651006712} |
| 0.0044 | 626.0 | 209710 | 1.6378 | {'f1': 0.8780804150453955} | {'accuracy': 0.8197507190795782} |
| 0.0051 | 627.0 | 210045 | 1.5833 | {'f1': 0.8797385620915033} | {'accuracy': 0.8235858101629914} |
| 0.0051 | 628.0 | 210380 | 1.5721 | {'f1': 0.8818897637795275} | {'accuracy': 0.8274209012464045} |
| 0.0031 | 629.0 | 210715 | 1.6291 | {'f1': 0.8784925276153347} | {'accuracy': 0.8207094918504314} |
| 0.0047 | 630.0 | 211050 | 1.6298 | {'f1': 0.8760545100584036} | {'accuracy': 0.8168744007670182} |
| 0.0047 | 631.0 | 211385 | 1.7736 | {'f1': 0.8745198463508322} | {'accuracy': 0.8120805369127517} |
| 0.003 | 632.0 | 211720 | 1.5395 | {'f1': 0.877284595300261} | {'accuracy': 0.8197507190795782} |
| 0.0076 | 633.0 | 212055 | 1.6973 | {'f1': 0.876765083440308} | {'accuracy': 0.8159156279961649} |
| 0.0076 | 634.0 | 212390 | 1.7115 | {'f1': 0.8766773162939298} | {'accuracy': 0.8149568552253116} |
| 0.0056 | 635.0 | 212725 | 1.5547 | {'f1': 0.8761408083441983} | {'accuracy': 0.8178331735378715} |
| 0.0053 | 636.0 | 213060 | 1.6427 | {'f1': 0.874025974025974} | {'accuracy': 0.8139980824544583} |
| 0.0053 | 637.0 | 213395 | 1.5879 | {'f1': 0.8758169934640523} | {'accuracy': 0.8178331735378715} |
| 0.0057 | 638.0 | 213730 | 1.6224 | {'f1': 0.877124183006536} | {'accuracy': 0.8197507190795782} |
| 0.0034 | 639.0 | 214065 | 1.6183 | {'f1': 0.8822381262199089} | {'accuracy': 0.8264621284755513} |
| 0.0034 | 640.0 | 214400 | 1.5877 | {'f1': 0.8855263157894737} | {'accuracy': 0.8331735378715245} |
| 0.0089 | 641.0 | 214735 | 1.6350 | {'f1': 0.8784565916398713} | {'accuracy': 0.8187919463087249} |
| 0.0038 | 642.0 | 215070 | 1.6359 | {'f1': 0.8805194805194805} | {'accuracy': 0.8235858101629914} |
| 0.0038 | 643.0 | 215405 | 1.6574 | {'f1': 0.8774193548387096} | {'accuracy': 0.8178331735378715} |
| 0.008 | 644.0 | 215740 | 1.4812 | {'f1': 0.885396201702685} | {'accuracy': 0.8322147651006712} |
| 0.0077 | 645.0 | 216075 | 1.4037 | {'f1': 0.8903566710700133} | {'accuracy': 0.840843720038351} |
| 0.0077 | 646.0 | 216410 | 1.4159 | {'f1': 0.8861092824226466} | {'accuracy': 0.8341323106423778} |
| 0.0034 | 647.0 | 216745 | 1.4374 | {'f1': 0.8847682119205298} | {'accuracy': 0.8331735378715245} |
| 0.0051 | 648.0 | 217080 | 1.4659 | {'f1': 0.8880105401844532} | {'accuracy': 0.8370086289549377} |
| 0.0051 | 649.0 | 217415 | 1.5300 | {'f1': 0.8836601307189542} | {'accuracy': 0.8293384467881112} |
| 0.0041 | 650.0 | 217750 | 1.4884 | {'f1': 0.8855263157894737} | {'accuracy': 0.8331735378715245} |
| 0.0048 | 651.0 | 218085 | 1.5347 | {'f1': 0.8851174934725848} | {'accuracy': 0.8312559923298178} |
| 0.0048 | 652.0 | 218420 | 1.5155 | {'f1': 0.8861256544502618} | {'accuracy': 0.8331735378715245} |
| 0.0051 | 653.0 | 218755 | 1.5072 | {'f1': 0.8852459016393444} | {'accuracy': 0.8322147651006712} |
| 0.0066 | 654.0 | 219090 | 1.5471 | {'f1': 0.8841423948220065} | {'accuracy': 0.8283796740172579} |
| 0.0066 | 655.0 | 219425 | 1.6866 | {'f1': 0.8786127167630058} | {'accuracy': 0.8187919463087249} |
| 0.0028 | 656.0 | 219760 | 1.5781 | {'f1': 0.8847435043304464} | {'accuracy': 0.8341323106423778} |
| 0.0051 | 657.0 | 220095 | 1.6018 | {'f1': 0.8841423948220065} | {'accuracy': 0.8283796740172579} |
| 0.0051 | 658.0 | 220430 | 1.6007 | {'f1': 0.8848167539267016} | {'accuracy': 0.8312559923298178} |
| 0.0034 | 659.0 | 220765 | 1.6493 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0035 | 660.0 | 221100 | 1.6292 | {'f1': 0.8790058862001309} | {'accuracy': 0.822627037392138} |
| 0.0035 | 661.0 | 221435 | 1.6268 | {'f1': 0.8816920026437541} | {'accuracy': 0.8283796740172579} |
| 0.0043 | 662.0 | 221770 | 1.7202 | {'f1': 0.8806161745827985} | {'accuracy': 0.8216682646212847} |
| 0.0044 | 663.0 | 222105 | 1.7276 | {'f1': 0.8800513149454778} | {'accuracy': 0.8207094918504314} |
| 0.0044 | 664.0 | 222440 | 1.6725 | {'f1': 0.883419689119171} | {'accuracy': 0.8274209012464045} |
| 0.0045 | 665.0 | 222775 | 1.5903 | {'f1': 0.8805280528052805} | {'accuracy': 0.8264621284755513} |
| 0.0033 | 666.0 | 223110 | 1.6448 | {'f1': 0.881311475409836} | {'accuracy': 0.8264621284755513} |
| 0.0033 | 667.0 | 223445 | 1.6078 | {'f1': 0.8812664907651715} | {'accuracy': 0.8274209012464045} |
| 0.0052 | 668.0 | 223780 | 1.8698 | {'f1': 0.8759493670886076} | {'accuracy': 0.8120805369127517} |
| 0.0056 | 669.0 | 224115 | 1.6594 | {'f1': 0.8777633289986997} | {'accuracy': 0.8197507190795782} |
| 0.0056 | 670.0 | 224450 | 1.7666 | {'f1': 0.8767471410419314} | {'accuracy': 0.8139980824544583} |
| 0.0048 | 671.0 | 224785 | 1.5940 | {'f1': 0.883289124668435} | {'accuracy': 0.8312559923298178} |
| 0.0044 | 672.0 | 225120 | 1.6732 | {'f1': 0.881664499349805} | {'accuracy': 0.825503355704698} |
| 0.0044 | 673.0 | 225455 | 1.5838 | {'f1': 0.88433575677462} | {'accuracy': 0.8322147651006712} |
| 0.0037 | 674.0 | 225790 | 1.7817 | {'f1': 0.8753180661577609} | {'accuracy': 0.8120805369127517} |
| 0.0058 | 675.0 | 226125 | 1.6803 | {'f1': 0.8810289389067525} | {'accuracy': 0.822627037392138} |
| 0.0058 | 676.0 | 226460 | 1.6020 | {'f1': 0.8833876221498371} | {'accuracy': 0.8283796740172579} |
| 0.0046 | 677.0 | 226795 | 1.6947 | {'f1': 0.8815958815958816} | {'accuracy': 0.8235858101629914} |
| 0.0015 | 678.0 | 227130 | 1.6027 | {'f1': 0.8814229249011858} | {'accuracy': 0.8274209012464045} |
| 0.0015 | 679.0 | 227465 | 1.6842 | {'f1': 0.8809368900455433} | {'accuracy': 0.8245445829338447} |
| 0.0035 | 680.0 | 227800 | 1.7749 | {'f1': 0.8734015345268541} | {'accuracy': 0.8101629913710451} |
| 0.0071 | 681.0 | 228135 | 1.6325 | {'f1': 0.8792650918635171} | {'accuracy': 0.8235858101629914} |
| 0.0071 | 682.0 | 228470 | 1.6693 | {'f1': 0.8817766165904638} | {'accuracy': 0.8264621284755513} |
| 0.0031 | 683.0 | 228805 | 1.7287 | {'f1': 0.8809368900455433} | {'accuracy': 0.8245445829338447} |
| 0.0051 | 684.0 | 229140 | 1.6958 | {'f1': 0.8803641092327698} | {'accuracy': 0.8235858101629914} |
| 0.0051 | 685.0 | 229475 | 1.7540 | {'f1': 0.8760436737315349} | {'accuracy': 0.8149568552253116} |
| 0.0033 | 686.0 | 229810 | 1.6853 | {'f1': 0.8787096774193548} | {'accuracy': 0.8197507190795782} |
| 0.009 | 687.0 | 230145 | 1.5281 | {'f1': 0.8827037773359842} | {'accuracy': 0.8302972195589645} |
| 0.009 | 688.0 | 230480 | 1.6425 | {'f1': 0.8807817589576548} | {'accuracy': 0.8245445829338447} |
| 0.005 | 689.0 | 230815 | 1.6769 | {'f1': 0.8823911630929175} | {'accuracy': 0.8264621284755513} |
| 0.0035 | 690.0 | 231150 | 1.7624 | {'f1': 0.8817065287653523} | {'accuracy': 0.8245445829338447} |
| 0.0035 | 691.0 | 231485 | 1.7393 | {'f1': 0.8798449612403101} | {'accuracy': 0.8216682646212847} |
| 0.0048 | 692.0 | 231820 | 1.7047 | {'f1': 0.8832354859752121} | {'accuracy': 0.8283796740172579} |
| 0.0027 | 693.0 | 232155 | 1.6658 | {'f1': 0.8846657929226737} | {'accuracy': 0.8312559923298178} |
| 0.0027 | 694.0 | 232490 | 1.8153 | {'f1': 0.8800513149454778} | {'accuracy': 0.8207094918504314} |
| 0.0027 | 695.0 | 232825 | 1.6971 | {'f1': 0.8807817589576548} | {'accuracy': 0.8245445829338447} |
| 0.006 | 696.0 | 233160 | 1.7047 | {'f1': 0.8825806451612904} | {'accuracy': 0.825503355704698} |
| 0.006 | 697.0 | 233495 | 1.7104 | {'f1': 0.8778280542986426} | {'accuracy': 0.8187919463087249} |
| 0.0051 | 698.0 | 233830 | 1.7271 | {'f1': 0.8773281952472703} | {'accuracy': 0.8168744007670182} |
| 0.0058 | 699.0 | 234165 | 1.5991 | {'f1': 0.8835078534031414} | {'accuracy': 0.8293384467881112} |
| 0.0041 | 700.0 | 234500 | 1.5880 | {'f1': 0.8871287128712871} | {'accuracy': 0.8360498561840843} |
| 0.0041 | 701.0 | 234835 | 1.6696 | {'f1': 0.8830829523187459} | {'accuracy': 0.8283796740172579} |
| 0.0054 | 702.0 | 235170 | 1.5805 | {'f1': 0.8834759710335748} | {'accuracy': 0.8302972195589645} |
| 0.0048 | 703.0 | 235505 | 1.7371 | {'f1': 0.8805681084570691} | {'accuracy': 0.822627037392138} |
| 0.0048 | 704.0 | 235840 | 1.6335 | {'f1': 0.8809831824062095} | {'accuracy': 0.8235858101629914} |
| 0.008 | 705.0 | 236175 | 1.5429 | {'f1': 0.8880157170923381} | {'accuracy': 0.8360498561840843} |
| 0.0072 | 706.0 | 236510 | 1.5920 | {'f1': 0.8799480856586632} | {'accuracy': 0.822627037392138} |
| 0.0072 | 707.0 | 236845 | 1.6666 | {'f1': 0.8783000643915004} | {'accuracy': 0.8187919463087249} |
| 0.005 | 708.0 | 237180 | 1.5701 | {'f1': 0.8830829523187459} | {'accuracy': 0.8283796740172579} |
| 0.0024 | 709.0 | 237515 | 1.5638 | {'f1': 0.8824688115561392} | {'accuracy': 0.8283796740172579} |
| 0.0024 | 710.0 | 237850 | 1.6545 | {'f1': 0.8796356538711776} | {'accuracy': 0.822627037392138} |
| 0.0022 | 711.0 | 238185 | 1.5872 | {'f1': 0.8821593153390389} | {'accuracy': 0.8283796740172579} |
| 0.0063 | 712.0 | 238520 | 1.4602 | {'f1': 0.8839704896042924} | {'accuracy': 0.8341323106423778} |
| 0.0063 | 713.0 | 238855 | 1.7557 | {'f1': 0.8793324775353017} | {'accuracy': 0.8197507190795782} |
| 0.0051 | 714.0 | 239190 | 1.5571 | {'f1': 0.8828125} | {'accuracy': 0.8274209012464045} |
| 0.0071 | 715.0 | 239525 | 1.5071 | {'f1': 0.8847926267281107} | {'accuracy': 0.8322147651006712} |
| 0.0071 | 716.0 | 239860 | 1.5371 | {'f1': 0.8842380640941793} | {'accuracy': 0.8302972195589645} |
| 0.0032 | 717.0 | 240195 | 1.6400 | {'f1': 0.8809831824062095} | {'accuracy': 0.8235858101629914} |
| 0.0039 | 718.0 | 240530 | 1.5848 | {'f1': 0.8836601307189542} | {'accuracy': 0.8293384467881112} |
| 0.0039 | 719.0 | 240865 | 1.5865 | {'f1': 0.8877216021011162} | {'accuracy': 0.8360498561840843} |
| 0.0027 | 720.0 | 241200 | 1.5462 | {'f1': 0.8862115127175368} | {'accuracy': 0.8370086289549377} |
| 0.0039 | 721.0 | 241535 | 1.6690 | {'f1': 0.8858267716535433} | {'accuracy': 0.8331735378715245} |
| 0.0039 | 722.0 | 241870 | 1.7565 | {'f1': 0.8812459441920829} | {'accuracy': 0.8245445829338447} |
| 0.0024 | 723.0 | 242205 | 1.6705 | {'f1': 0.8844884488448844} | {'accuracy': 0.8322147651006712} |
| 0.0058 | 724.0 | 242540 | 1.7562 | {'f1': 0.8804137039431158} | {'accuracy': 0.822627037392138} |
| 0.0058 | 725.0 | 242875 | 1.7151 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0046 | 726.0 | 243210 | 1.7525 | {'f1': 0.8844155844155843} | {'accuracy': 0.8293384467881112} |
| 0.003 | 727.0 | 243545 | 1.7908 | {'f1': 0.8814432989690721} | {'accuracy': 0.8235858101629914} |
| 0.003 | 728.0 | 243880 | 1.7585 | {'f1': 0.8798972382787411} | {'accuracy': 0.8207094918504314} |
| 0.0078 | 729.0 | 244215 | 1.6461 | {'f1': 0.8808900523560209} | {'accuracy': 0.825503355704698} |
| 0.0055 | 730.0 | 244550 | 1.8172 | {'f1': 0.8753213367609254} | {'accuracy': 0.8139980824544583} |
| 0.0055 | 731.0 | 244885 | 1.7606 | {'f1': 0.8777991042866283} | {'accuracy': 0.8168744007670182} |
| 0.0069 | 732.0 | 245220 | 1.6387 | {'f1': 0.877284595300261} | {'accuracy': 0.8197507190795782} |
| 0.0035 | 733.0 | 245555 | 1.6955 | {'f1': 0.8787483702737942} | {'accuracy': 0.8216682646212847} |
| 0.0035 | 734.0 | 245890 | 1.5542 | {'f1': 0.8819628647214854} | {'accuracy': 0.8293384467881112} |
| 0.0062 | 735.0 | 246225 | 1.5837 | {'f1': 0.8787483702737942} | {'accuracy': 0.8216682646212847} |
| 0.0042 | 736.0 | 246560 | 1.5991 | {'f1': 0.881201044386423} | {'accuracy': 0.825503355704698} |
| 0.0042 | 737.0 | 246895 | 1.5077 | {'f1': 0.8878566688785668} | {'accuracy': 0.837967401725791} |
| 0.0051 | 738.0 | 247230 | 1.6261 | {'f1': 0.8832354859752121} | {'accuracy': 0.8283796740172579} |
| 0.0049 | 739.0 | 247565 | 1.7079 | {'f1': 0.8785529715762274} | {'accuracy': 0.8197507190795782} |
| 0.0049 | 740.0 | 247900 | 1.5703 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0071 | 741.0 | 248235 | 1.5480 | {'f1': 0.8817766165904638} | {'accuracy': 0.8264621284755513} |
| 0.0043 | 742.0 | 248570 | 1.5021 | {'f1': 0.8823529411764706} | {'accuracy': 0.8274209012464045} |
| 0.0043 | 743.0 | 248905 | 1.6375 | {'f1': 0.8815533980582524} | {'accuracy': 0.8245445829338447} |
| 0.0023 | 744.0 | 249240 | 1.4291 | {'f1': 0.8847926267281107} | {'accuracy': 0.8322147651006712} |
| 0.0066 | 745.0 | 249575 | 1.5330 | {'f1': 0.8824289405684754} | {'accuracy': 0.825503355704698} |
| 0.0066 | 746.0 | 249910 | 1.4237 | {'f1': 0.8823529411764706} | {'accuracy': 0.8274209012464045} |
| 0.0067 | 747.0 | 250245 | 1.3874 | {'f1': 0.883289124668435} | {'accuracy': 0.8312559923298178} |
| 0.0034 | 748.0 | 250580 | 1.4532 | {'f1': 0.8837209302325582} | {'accuracy': 0.8322147651006712} |
| 0.0034 | 749.0 | 250915 | 1.4834 | {'f1': 0.881201044386423} | {'accuracy': 0.825503355704698} |
| 0.0034 | 750.0 | 251250 | 1.5196 | {'f1': 0.8796356538711776} | {'accuracy': 0.822627037392138} |
| 0.0034 | 751.0 | 251585 | 1.5726 | {'f1': 0.8819714656290532} | {'accuracy': 0.825503355704698} |
| 0.0034 | 752.0 | 251920 | 1.5491 | {'f1': 0.8801571709233792} | {'accuracy': 0.8245445829338447} |
| 0.0042 | 753.0 | 252255 | 1.4986 | {'f1': 0.8792079207920793} | {'accuracy': 0.8245445829338447} |
| 0.0045 | 754.0 | 252590 | 1.4673 | {'f1': 0.8822751322751322} | {'accuracy': 0.8293384467881112} |
| 0.0045 | 755.0 | 252925 | 1.5067 | {'f1': 0.8797886393659181} | {'accuracy': 0.825503355704698} |
| 0.0058 | 756.0 | 253260 | 1.6951 | {'f1': 0.8775773195876289} | {'accuracy': 0.8178331735378715} |
| 0.0052 | 757.0 | 253595 | 1.5915 | {'f1': 0.8787483702737942} | {'accuracy': 0.8216682646212847} |
| 0.0052 | 758.0 | 253930 | 1.5778 | {'f1': 0.8820039551746869} | {'accuracy': 0.8283796740172579} |
| 0.0032 | 759.0 | 254265 | 1.6840 | {'f1': 0.8763754045307444} | {'accuracy': 0.8168744007670182} |
| 0.0058 | 760.0 | 254600 | 1.5173 | {'f1': 0.881491344873502} | {'accuracy': 0.8293384467881112} |
| 0.0058 | 761.0 | 254935 | 1.5686 | {'f1': 0.8847926267281107} | {'accuracy': 0.8322147651006712} |
| 0.0033 | 762.0 | 255270 | 1.7293 | {'f1': 0.8760436737315349} | {'accuracy': 0.8149568552253116} |
| 0.0031 | 763.0 | 255605 | 1.7252 | {'f1': 0.876943005181347} | {'accuracy': 0.8178331735378715} |
| 0.0031 | 764.0 | 255940 | 1.7134 | {'f1': 0.8782383419689119} | {'accuracy': 0.8197507190795782} |
| 0.005 | 765.0 | 256275 | 1.6881 | {'f1': 0.8771021992238034} | {'accuracy': 0.8178331735378715} |
| 0.0037 | 766.0 | 256610 | 1.5824 | {'f1': 0.8814669286182057} | {'accuracy': 0.8264621284755513} |
| 0.0037 | 767.0 | 256945 | 1.6678 | {'f1': 0.8792207792207791} | {'accuracy': 0.8216682646212847} |
| 0.0032 | 768.0 | 257280 | 1.6520 | {'f1': 0.8811369509043928} | {'accuracy': 0.8235858101629914} |
| 0.0033 | 769.0 | 257615 | 1.5788 | {'f1': 0.8794788273615636} | {'accuracy': 0.822627037392138} |
| 0.0033 | 770.0 | 257950 | 1.7073 | {'f1': 0.8781431334622825} | {'accuracy': 0.8187919463087249} |
| 0.0064 | 771.0 | 258285 | 1.5705 | {'f1': 0.8825857519788918} | {'accuracy': 0.8293384467881112} |
| 0.0034 | 772.0 | 258620 | 1.7071 | {'f1': 0.8765352294764059} | {'accuracy': 0.8168744007670182} |
| 0.0034 | 773.0 | 258955 | 1.7116 | {'f1': 0.8775773195876289} | {'accuracy': 0.8178331735378715} |
| 0.0043 | 774.0 | 259290 | 1.6249 | {'f1': 0.877124183006536} | {'accuracy': 0.8197507190795782} |
| 0.0055 | 775.0 | 259625 | 1.5984 | {'f1': 0.8785900783289817} | {'accuracy': 0.8216682646212847} |
| 0.0055 | 776.0 | 259960 | 1.5935 | {'f1': 0.8800521512385918} | {'accuracy': 0.8235858101629914} |
| 0.0037 | 777.0 | 260295 | 1.5598 | {'f1': 0.8782093482554312} | {'accuracy': 0.822627037392138} |
| 0.0049 | 778.0 | 260630 | 1.5901 | {'f1': 0.8777923784494087} | {'accuracy': 0.8216682646212847} |
| 0.0049 | 779.0 | 260965 | 1.6492 | {'f1': 0.8767123287671234} | {'accuracy': 0.8187919463087249} |
| 0.0026 | 780.0 | 261300 | 1.7694 | {'f1': 0.8749198203976909} | {'accuracy': 0.8130393096836049} |
| 0.0054 | 781.0 | 261635 | 1.7149 | {'f1': 0.8748387096774194} | {'accuracy': 0.8139980824544583} |
| 0.0054 | 782.0 | 261970 | 1.6788 | {'f1': 0.875} | {'accuracy': 0.8159156279961649} |
| 0.0033 | 783.0 | 262305 | 1.7130 | {'f1': 0.8762151652624757} | {'accuracy': 0.8168744007670182} |
| 0.006 | 784.0 | 262640 | 1.5759 | {'f1': 0.8802117802779617} | {'accuracy': 0.8264621284755513} |
| 0.006 | 785.0 | 262975 | 1.6508 | {'f1': 0.8777633289986997} | {'accuracy': 0.8197507190795782} |
| 0.0031 | 786.0 | 263310 | 1.7313 | {'f1': 0.8755641521598968} | {'accuracy': 0.8149568552253116} |
| 0.0042 | 787.0 | 263645 | 1.6428 | {'f1': 0.8792650918635171} | {'accuracy': 0.8235858101629914} |
| 0.0042 | 788.0 | 263980 | 1.7532 | {'f1': 0.8788659793814433} | {'accuracy': 0.8197507190795782} |
| 0.0036 | 789.0 | 264315 | 1.7189 | {'f1': 0.8782383419689119} | {'accuracy': 0.8197507190795782} |
| 0.0048 | 790.0 | 264650 | 1.7035 | {'f1': 0.876943005181347} | {'accuracy': 0.8178331735378715} |
| 0.0048 | 791.0 | 264985 | 1.6092 | {'f1': 0.8788881535407015} | {'accuracy': 0.8245445829338447} |
| 0.0045 | 792.0 | 265320 | 1.6136 | {'f1': 0.8791500664010624} | {'accuracy': 0.825503355704698} |
| 0.0032 | 793.0 | 265655 | 1.6764 | {'f1': 0.8794233289646134} | {'accuracy': 0.8235858101629914} |
| 0.0032 | 794.0 | 265990 | 1.8102 | {'f1': 0.8770122343850613} | {'accuracy': 0.8168744007670182} |
| 0.0048 | 795.0 | 266325 | 1.7445 | {'f1': 0.8776041666666666} | {'accuracy': 0.8197507190795782} |
| 0.004 | 796.0 | 266660 | 1.7284 | {'f1': 0.8762278978388998} | {'accuracy': 0.8187919463087249} |
| 0.004 | 797.0 | 266995 | 1.8343 | {'f1': 0.8733850129198967} | {'accuracy': 0.8120805369127517} |
| 0.0039 | 798.0 | 267330 | 1.8077 | {'f1': 0.8711974110032363} | {'accuracy': 0.8092042186001918} |
| 0.0026 | 799.0 | 267665 | 1.8041 | {'f1': 0.8721609344581441} | {'accuracy': 0.8111217641418984} |
| 0.0032 | 800.0 | 268000 | 1.7025 | {'f1': 0.8786885245901639} | {'accuracy': 0.822627037392138} |
| 0.0032 | 801.0 | 268335 | 1.7495 | {'f1': 0.8763020833333333} | {'accuracy': 0.8178331735378715} |
| 0.0058 | 802.0 | 268670 | 1.7034 | {'f1': 0.8788474132285528} | {'accuracy': 0.822627037392138} |
| 0.0028 | 803.0 | 269005 | 1.7309 | {'f1': 0.8806262230919765} | {'accuracy': 0.8245445829338447} |
| 0.0028 | 804.0 | 269340 | 1.7222 | {'f1': 0.8797920727745288} | {'accuracy': 0.822627037392138} |
| 0.0033 | 805.0 | 269675 | 1.6105 | {'f1': 0.8781127129750983} | {'accuracy': 0.8216682646212847} |
| 0.0043 | 806.0 | 270010 | 1.6209 | {'f1': 0.8791064388961892} | {'accuracy': 0.8235858101629914} |
| 0.0043 | 807.0 | 270345 | 1.7457 | {'f1': 0.8800000000000001} | {'accuracy': 0.8216682646212847} |
| 0.0035 | 808.0 | 270680 | 1.8058 | {'f1': 0.8761290322580645} | {'accuracy': 0.8159156279961649} |
| 0.0027 | 809.0 | 271015 | 1.8286 | {'f1': 0.8741976893453146} | {'accuracy': 0.8120805369127517} |
| 0.0027 | 810.0 | 271350 | 1.7140 | {'f1': 0.8838120104438641} | {'accuracy': 0.8293384467881112} |
| 0.0029 | 811.0 | 271685 | 1.6997 | {'f1': 0.8820445609436436} | {'accuracy': 0.8274209012464045} |
| 0.004 | 812.0 | 272020 | 1.7859 | {'f1': 0.8779857972885732} | {'accuracy': 0.8187919463087249} |
| 0.004 | 813.0 | 272355 | 1.8436 | {'f1': 0.8741976893453146} | {'accuracy': 0.8120805369127517} |
| 0.0031 | 814.0 | 272690 | 1.7848 | {'f1': 0.8745148771021992} | {'accuracy': 0.8139980824544583} |
| 0.0043 | 815.0 | 273025 | 1.8718 | {'f1': 0.872680742162508} | {'accuracy': 0.8092042186001918} |
| 0.0043 | 816.0 | 273360 | 1.6951 | {'f1': 0.8808900523560209} | {'accuracy': 0.825503355704698} |
| 0.0034 | 817.0 | 273695 | 1.7584 | {'f1': 0.880103694102398} | {'accuracy': 0.822627037392138} |
| 0.003 | 818.0 | 274030 | 1.6783 | {'f1': 0.8821192052980134} | {'accuracy': 0.8293384467881112} |
| 0.003 | 819.0 | 274365 | 1.7632 | {'f1': 0.8792207792207791} | {'accuracy': 0.8216682646212847} |
| 0.0028 | 820.0 | 274700 | 1.6914 | {'f1': 0.8820039551746869} | {'accuracy': 0.8283796740172579} |
| 0.0064 | 821.0 | 275035 | 1.7082 | {'f1': 0.8768729641693811} | {'accuracy': 0.8187919463087249} |
| 0.0064 | 822.0 | 275370 | 1.6736 | {'f1': 0.88} | {'accuracy': 0.8245445829338447} |
| 0.004 | 823.0 | 275705 | 1.6481 | {'f1': 0.880842659644503} | {'accuracy': 0.8264621284755513} |
| 0.0041 | 824.0 | 276040 | 1.6841 | {'f1': 0.8806262230919765} | {'accuracy': 0.8245445829338447} |
| 0.0041 | 825.0 | 276375 | 1.7816 | {'f1': 0.8787096774193548} | {'accuracy': 0.8197507190795782} |
| 0.0028 | 826.0 | 276710 | 1.6381 | {'f1': 0.8811556139198949} | {'accuracy': 0.8264621284755513} |
| 0.0046 | 827.0 | 277045 | 1.6258 | {'f1': 0.8803139306736429} | {'accuracy': 0.8245445829338447} |
| 0.0046 | 828.0 | 277380 | 1.5245 | {'f1': 0.8837516512549538} | {'accuracy': 0.8312559923298178} |
| 0.0064 | 829.0 | 277715 | 1.6464 | {'f1': 0.8809368900455433} | {'accuracy': 0.8245445829338447} |
| 0.0013 | 830.0 | 278050 | 1.5779 | {'f1': 0.8834437086092717} | {'accuracy': 0.8312559923298178} |
| 0.0013 | 831.0 | 278385 | 1.7291 | {'f1': 0.8779857972885732} | {'accuracy': 0.8187919463087249} |
| 0.0035 | 832.0 | 278720 | 1.6297 | {'f1': 0.8817345597897503} | {'accuracy': 0.8274209012464045} |
| 0.0036 | 833.0 | 279055 | 1.5615 | {'f1': 0.8860927152317881} | {'accuracy': 0.835091083413231} |
| 0.0036 | 834.0 | 279390 | 1.7254 | {'f1': 0.8763613068545804} | {'accuracy': 0.8149568552253116} |
| 0.0066 | 835.0 | 279725 | 1.5162 | {'f1': 0.8843085106382979} | {'accuracy': 0.8331735378715245} |
| 0.0025 | 836.0 | 280060 | 1.5393 | {'f1': 0.8847682119205298} | {'accuracy': 0.8331735378715245} |
| 0.0025 | 837.0 | 280395 | 1.6011 | {'f1': 0.8825857519788918} | {'accuracy': 0.8293384467881112} |
| 0.0047 | 838.0 | 280730 | 1.7815 | {'f1': 0.8778920308483289} | {'accuracy': 0.8178331735378715} |
| 0.0069 | 839.0 | 281065 | 1.6868 | {'f1': 0.8778280542986426} | {'accuracy': 0.8187919463087249} |
| 0.0069 | 840.0 | 281400 | 1.6110 | {'f1': 0.8790058862001309} | {'accuracy': 0.822627037392138} |
| 0.0057 | 841.0 | 281735 | 1.7266 | {'f1': 0.8741123305358297} | {'accuracy': 0.8130393096836049} |
| 0.0023 | 842.0 | 282070 | 1.5071 | {'f1': 0.8852023888520238} | {'accuracy': 0.8341323106423778} |
| 0.0023 | 843.0 | 282405 | 1.6223 | {'f1': 0.8801571709233792} | {'accuracy': 0.8245445829338447} |
| 0.0032 | 844.0 | 282740 | 1.5283 | {'f1': 0.8787878787878788} | {'accuracy': 0.8235858101629914} |
| 0.0065 | 845.0 | 283075 | 1.6950 | {'f1': 0.8765352294764059} | {'accuracy': 0.8168744007670182} |
| 0.0065 | 846.0 | 283410 | 1.5313 | {'f1': 0.8794788273615636} | {'accuracy': 0.822627037392138} |
| 0.0059 | 847.0 | 283745 | 1.5719 | {'f1': 0.8818181818181818} | {'accuracy': 0.825503355704698} |
| 0.0023 | 848.0 | 284080 | 1.5461 | {'f1': 0.8833551769331585} | {'accuracy': 0.8293384467881112} |
| 0.0023 | 849.0 | 284415 | 1.6004 | {'f1': 0.88296488946684} | {'accuracy': 0.8274209012464045} |
| 0.0026 | 850.0 | 284750 | 1.6252 | {'f1': 0.8783958602846055} | {'accuracy': 0.8197507190795782} |
| 0.0073 | 851.0 | 285085 | 1.5237 | {'f1': 0.8790058862001309} | {'accuracy': 0.822627037392138} |
| 0.0073 | 852.0 | 285420 | 1.5458 | {'f1': 0.879526003949967} | {'accuracy': 0.8245445829338447} |
| 0.0037 | 853.0 | 285755 | 1.5439 | {'f1': 0.8796844181459565} | {'accuracy': 0.8245445829338447} |
| 0.0028 | 854.0 | 286090 | 1.6754 | {'f1': 0.8766946417043254} | {'accuracy': 0.8168744007670182} |
| 0.0028 | 855.0 | 286425 | 1.5802 | {'f1': 0.8765676567656766} | {'accuracy': 0.8207094918504314} |
| 0.0024 | 856.0 | 286760 | 1.5366 | {'f1': 0.879526003949967} | {'accuracy': 0.8245445829338447} |
| 0.0082 | 857.0 | 287095 | 1.5267 | {'f1': 0.8787276341948311} | {'accuracy': 0.8245445829338447} |
| 0.0082 | 858.0 | 287430 | 1.6491 | {'f1': 0.8794326241134752} | {'accuracy': 0.8207094918504314} |
| 0.0034 | 859.0 | 287765 | 1.5808 | {'f1': 0.8795811518324608} | {'accuracy': 0.8235858101629914} |
| 0.0062 | 860.0 | 288100 | 1.5517 | {'f1': 0.8780169602087411} | {'accuracy': 0.8207094918504314} |
| 0.0062 | 861.0 | 288435 | 1.5949 | {'f1': 0.8791208791208791} | {'accuracy': 0.8207094918504314} |
| 0.0018 | 862.0 | 288770 | 1.5440 | {'f1': 0.8823529411764706} | {'accuracy': 0.8274209012464045} |
| 0.0049 | 863.0 | 289105 | 1.4895 | {'f1': 0.8811556139198949} | {'accuracy': 0.8264621284755513} |
| 0.0049 | 864.0 | 289440 | 1.5049 | {'f1': 0.8863936591809775} | {'accuracy': 0.835091083413231} |
| 0.0026 | 865.0 | 289775 | 1.5587 | {'f1': 0.8808900523560209} | {'accuracy': 0.825503355704698} |
| 0.0039 | 866.0 | 290110 | 1.5543 | {'f1': 0.8818897637795275} | {'accuracy': 0.8274209012464045} |
| 0.0039 | 867.0 | 290445 | 1.6776 | {'f1': 0.8785529715762274} | {'accuracy': 0.8197507190795782} |
| 0.0038 | 868.0 | 290780 | 1.5554 | {'f1': 0.8816920026437541} | {'accuracy': 0.8283796740172579} |
| 0.0044 | 869.0 | 291115 | 1.6552 | {'f1': 0.8793774319066148} | {'accuracy': 0.8216682646212847} |
| 0.0044 | 870.0 | 291450 | 1.6860 | {'f1': 0.8754034861200773} | {'accuracy': 0.8149568552253116} |
| 0.0048 | 871.0 | 291785 | 1.5888 | {'f1': 0.8773770491803279} | {'accuracy': 0.8207094918504314} |
| 0.005 | 872.0 | 292120 | 1.7552 | {'f1': 0.8716956802063185} | {'accuracy': 0.8092042186001918} |
| 0.005 | 873.0 | 292455 | 1.6055 | {'f1': 0.8782093482554312} | {'accuracy': 0.822627037392138} |
| 0.0017 | 874.0 | 292790 | 1.6851 | {'f1': 0.8757319453480807} | {'accuracy': 0.8168744007670182} |
| 0.0019 | 875.0 | 293125 | 1.5263 | {'f1': 0.8859239492995331} | {'accuracy': 0.8360498561840843} |
| 0.0019 | 876.0 | 293460 | 1.8864 | {'f1': 0.8762755102040816} | {'accuracy': 0.8139980824544583} |
| 0.0046 | 877.0 | 293795 | 1.6726 | {'f1': 0.8762151652624757} | {'accuracy': 0.8168744007670182} |
| 0.0053 | 878.0 | 294130 | 1.6633 | {'f1': 0.8768729641693811} | {'accuracy': 0.8187919463087249} |
| 0.0053 | 879.0 | 294465 | 1.5332 | {'f1': 0.881201044386423} | {'accuracy': 0.825503355704698} |
| 0.0038 | 880.0 | 294800 | 1.5875 | {'f1': 0.8794788273615636} | {'accuracy': 0.822627037392138} |
| 0.0016 | 881.0 | 295135 | 1.5800 | {'f1': 0.8823529411764706} | {'accuracy': 0.8274209012464045} |
| 0.0016 | 882.0 | 295470 | 1.7071 | {'f1': 0.8783000643915004} | {'accuracy': 0.8187919463087249} |
| 0.0036 | 883.0 | 295805 | 1.5565 | {'f1': 0.8828590337524818} | {'accuracy': 0.8302972195589645} |
| 0.0029 | 884.0 | 296140 | 1.6237 | {'f1': 0.8787878787878788} | {'accuracy': 0.8235858101629914} |
| 0.0029 | 885.0 | 296475 | 1.6317 | {'f1': 0.8809993425378042} | {'accuracy': 0.8264621284755513} |
| 0.0023 | 886.0 | 296810 | 1.7371 | {'f1': 0.8825806451612904} | {'accuracy': 0.825503355704698} |
| 0.0042 | 887.0 | 297145 | 1.6704 | {'f1': 0.8825065274151437} | {'accuracy': 0.8274209012464045} |
| 0.0042 | 888.0 | 297480 | 1.6768 | {'f1': 0.8818181818181818} | {'accuracy': 0.825503355704698} |
| 0.0034 | 889.0 | 297815 | 1.5612 | {'f1': 0.8820786142571618} | {'accuracy': 0.8302972195589645} |
| 0.0032 | 890.0 | 298150 | 1.6058 | {'f1': 0.8811556139198949} | {'accuracy': 0.8264621284755513} |
| 0.0032 | 891.0 | 298485 | 1.5487 | {'f1': 0.8815789473684211} | {'accuracy': 0.8274209012464045} |
| 0.0042 | 892.0 | 298820 | 1.6238 | {'f1': 0.8808290155440415} | {'accuracy': 0.8235858101629914} |
| 0.005 | 893.0 | 299155 | 1.6006 | {'f1': 0.8826597131681878} | {'accuracy': 0.8274209012464045} |
| 0.005 | 894.0 | 299490 | 1.5974 | {'f1': 0.8840864440078585} | {'accuracy': 0.8302972195589645} |
| 0.0026 | 895.0 | 299825 | 1.5750 | {'f1': 0.8837820091923834} | {'accuracy': 0.8302972195589645} |
| 0.0029 | 896.0 | 300160 | 1.5631 | {'f1': 0.8874172185430463} | {'accuracy': 0.8370086289549377} |
| 0.0029 | 897.0 | 300495 | 1.4892 | {'f1': 0.8828590337524818} | {'accuracy': 0.8302972195589645} |
| 0.0078 | 898.0 | 300830 | 1.6689 | {'f1': 0.8818998716302953} | {'accuracy': 0.8235858101629914} |
| 0.0029 | 899.0 | 301165 | 1.6107 | {'f1': 0.880674448767834} | {'accuracy': 0.8235858101629914} |
| 0.0033 | 900.0 | 301500 | 1.5714 | {'f1': 0.8861256544502618} | {'accuracy': 0.8331735378715245} |
| 0.0033 | 901.0 | 301835 | 1.5784 | {'f1': 0.8825065274151437} | {'accuracy': 0.8274209012464045} |
| 0.0034 | 902.0 | 302170 | 1.6497 | {'f1': 0.8831504196255648} | {'accuracy': 0.8264621284755513} |
| 0.0018 | 903.0 | 302505 | 1.5811 | {'f1': 0.8874259381171825} | {'accuracy': 0.8360498561840843} |
| 0.0018 | 904.0 | 302840 | 1.6148 | {'f1': 0.884967320261438} | {'accuracy': 0.8312559923298178} |
| 0.0034 | 905.0 | 303175 | 1.6339 | {'f1': 0.8880105401844532} | {'accuracy': 0.8370086289549377} |
| 0.0024 | 906.0 | 303510 | 1.6064 | {'f1': 0.8850498338870432} | {'accuracy': 0.8341323106423778} |
| 0.0024 | 907.0 | 303845 | 1.6667 | {'f1': 0.8844155844155843} | {'accuracy': 0.8293384467881112} |
| 0.0042 | 908.0 | 304180 | 1.5906 | {'f1': 0.8868421052631579} | {'accuracy': 0.835091083413231} |
| 0.0044 | 909.0 | 304515 | 1.6430 | {'f1': 0.8805194805194805} | {'accuracy': 0.8235858101629914} |
| 0.0044 | 910.0 | 304850 | 1.5804 | {'f1': 0.8821989528795812} | {'accuracy': 0.8274209012464045} |
| 0.0054 | 911.0 | 305185 | 1.5657 | {'f1': 0.8858085808580859} | {'accuracy': 0.8341323106423778} |
| 0.0035 | 912.0 | 305520 | 1.6200 | {'f1': 0.8867059593975114} | {'accuracy': 0.8341323106423778} |
| 0.0035 | 913.0 | 305855 | 1.6597 | {'f1': 0.881664499349805} | {'accuracy': 0.825503355704698} |
| 0.0045 | 914.0 | 306190 | 1.5984 | {'f1': 0.883322346736981} | {'accuracy': 0.8302972195589645} |
| 0.0038 | 915.0 | 306525 | 1.6561 | {'f1': 0.8839634941329856} | {'accuracy': 0.8293384467881112} |
| 0.0038 | 916.0 | 306860 | 1.6429 | {'f1': 0.8805194805194805} | {'accuracy': 0.8235858101629914} |
| 0.0035 | 917.0 | 307195 | 1.5792 | {'f1': 0.8855263157894737} | {'accuracy': 0.8331735378715245} |
| 0.0035 | 918.0 | 307530 | 1.7385 | {'f1': 0.8787684413085312} | {'accuracy': 0.8187919463087249} |
| 0.0035 | 919.0 | 307865 | 1.6209 | {'f1': 0.8826597131681878} | {'accuracy': 0.8274209012464045} |
| 0.0019 | 920.0 | 308200 | 1.7804 | {'f1': 0.8798972382787411} | {'accuracy': 0.8207094918504314} |
| 0.0031 | 921.0 | 308535 | 1.6301 | {'f1': 0.8826229508196721} | {'accuracy': 0.8283796740172579} |
| 0.0031 | 922.0 | 308870 | 1.6976 | {'f1': 0.8823911630929175} | {'accuracy': 0.8264621284755513} |
| 0.0038 | 923.0 | 309205 | 1.6510 | {'f1': 0.882084690553746} | {'accuracy': 0.8264621284755513} |
| 0.0042 | 924.0 | 309540 | 1.5643 | {'f1': 0.8849206349206349} | {'accuracy': 0.8331735378715245} |
| 0.0042 | 925.0 | 309875 | 1.6576 | {'f1': 0.880103694102398} | {'accuracy': 0.822627037392138} |
| 0.0028 | 926.0 | 310210 | 1.5039 | {'f1': 0.8815612382234185} | {'accuracy': 0.8312559923298178} |
| 0.0062 | 927.0 | 310545 | 1.5374 | {'f1': 0.8850498338870432} | {'accuracy': 0.8341323106423778} |
| 0.0062 | 928.0 | 310880 | 1.5927 | {'f1': 0.8826229508196721} | {'accuracy': 0.8283796740172579} |
| 0.0044 | 929.0 | 311215 | 1.5802 | {'f1': 0.88433575677462} | {'accuracy': 0.8322147651006712} |
| 0.0027 | 930.0 | 311550 | 1.6385 | {'f1': 0.8822381262199089} | {'accuracy': 0.8264621284755513} |
| 0.0027 | 931.0 | 311885 | 1.6352 | {'f1': 0.8851174934725848} | {'accuracy': 0.8312559923298178} |
| 0.0019 | 932.0 | 312220 | 1.5997 | {'f1': 0.8833551769331585} | {'accuracy': 0.8293384467881112} |
| 0.0057 | 933.0 | 312555 | 1.7269 | {'f1': 0.8792332268370607} | {'accuracy': 0.8187919463087249} |
| 0.0057 | 934.0 | 312890 | 1.6056 | {'f1': 0.88296488946684} | {'accuracy': 0.8274209012464045} |
| 0.0041 | 935.0 | 313225 | 1.5573 | {'f1': 0.8805774278215224} | {'accuracy': 0.825503355704698} |
| 0.0047 | 936.0 | 313560 | 1.5695 | {'f1': 0.881201044386423} | {'accuracy': 0.825503355704698} |
| 0.0047 | 937.0 | 313895 | 1.5672 | {'f1': 0.8816219751471549} | {'accuracy': 0.8264621284755513} |
| 0.0032 | 938.0 | 314230 | 1.6133 | {'f1': 0.8823911630929175} | {'accuracy': 0.8264621284755513} |
| 0.0066 | 939.0 | 314565 | 1.6662 | {'f1': 0.8798972382787411} | {'accuracy': 0.8207094918504314} |
| 0.0066 | 940.0 | 314900 | 1.6603 | {'f1': 0.8811369509043928} | {'accuracy': 0.8235858101629914} |
| 0.0034 | 941.0 | 315235 | 1.6765 | {'f1': 0.8804627249357326} | {'accuracy': 0.8216682646212847} |
| 0.0027 | 942.0 | 315570 | 1.5801 | {'f1': 0.8804204993429697} | {'accuracy': 0.825503355704698} |
| 0.0027 | 943.0 | 315905 | 1.6363 | {'f1': 0.8809368900455433} | {'accuracy': 0.8245445829338447} |
| 0.005 | 944.0 | 316240 | 1.5752 | {'f1': 0.8828125} | {'accuracy': 0.8274209012464045} |
| 0.0034 | 945.0 | 316575 | 1.6556 | {'f1': 0.8809218950064021} | {'accuracy': 0.8216682646212847} |
| 0.0034 | 946.0 | 316910 | 1.5272 | {'f1': 0.8847926267281107} | {'accuracy': 0.8322147651006712} |
| 0.0036 | 947.0 | 317245 | 1.5728 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0043 | 948.0 | 317580 | 1.5613 | {'f1': 0.8834759710335748} | {'accuracy': 0.8302972195589645} |
| 0.0043 | 949.0 | 317915 | 1.7344 | {'f1': 0.8813341885824246} | {'accuracy': 0.822627037392138} |
| 0.0025 | 950.0 | 318250 | 1.6721 | {'f1': 0.8838416612589227} | {'accuracy': 0.8283796740172579} |
| 0.0023 | 951.0 | 318585 | 1.6155 | {'f1': 0.8806855636123929} | {'accuracy': 0.8264621284755513} |
| 0.0023 | 952.0 | 318920 | 1.6973 | {'f1': 0.8839634941329856} | {'accuracy': 0.8293384467881112} |
| 0.0025 | 953.0 | 319255 | 1.7128 | {'f1': 0.8852883992222943} | {'accuracy': 0.8302972195589645} |
| 0.0027 | 954.0 | 319590 | 1.6258 | {'f1': 0.882392026578073} | {'accuracy': 0.8302972195589645} |
| 0.0027 | 955.0 | 319925 | 1.6449 | {'f1': 0.8791064388961892} | {'accuracy': 0.8235858101629914} |
| 0.0041 | 956.0 | 320260 | 1.7144 | {'f1': 0.8838416612589227} | {'accuracy': 0.8283796740172579} |
| 0.0033 | 957.0 | 320595 | 1.6443 | {'f1': 0.8791064388961892} | {'accuracy': 0.8235858101629914} |
| 0.0033 | 958.0 | 320930 | 1.6851 | {'f1': 0.8835393623942747} | {'accuracy': 0.8283796740172579} |
| 0.0033 | 959.0 | 321265 | 1.6488 | {'f1': 0.8783694937541091} | {'accuracy': 0.822627037392138} |
| 0.0042 | 960.0 | 321600 | 1.6676 | {'f1': 0.8833876221498371} | {'accuracy': 0.8283796740172579} |
| 0.0042 | 961.0 | 321935 | 1.6440 | {'f1': 0.8813559322033898} | {'accuracy': 0.825503355704698} |
| 0.0034 | 962.0 | 322270 | 1.5868 | {'f1': 0.881201044386423} | {'accuracy': 0.825503355704698} |
| 0.0039 | 963.0 | 322605 | 1.6646 | {'f1': 0.8787096774193548} | {'accuracy': 0.8197507190795782} |
| 0.0039 | 964.0 | 322940 | 1.5576 | {'f1': 0.8825481088254811} | {'accuracy': 0.8302972195589645} |
| 0.0027 | 965.0 | 323275 | 1.6395 | {'f1': 0.883419689119171} | {'accuracy': 0.8274209012464045} |
| 0.0061 | 966.0 | 323610 | 1.5301 | {'f1': 0.8805870580386924} | {'accuracy': 0.8283796740172579} |
| 0.0061 | 967.0 | 323945 | 1.5664 | {'f1': 0.8811556139198949} | {'accuracy': 0.8264621284755513} |
| 0.0027 | 968.0 | 324280 | 1.5934 | {'f1': 0.8825438027255029} | {'accuracy': 0.8264621284755513} |
| 0.0046 | 969.0 | 324615 | 1.6040 | {'f1': 0.8841145833333333} | {'accuracy': 0.8293384467881112} |
| 0.0046 | 970.0 | 324950 | 1.5580 | {'f1': 0.8801054018445323} | {'accuracy': 0.825503355704698} |
| 0.0024 | 971.0 | 325285 | 1.6129 | {'f1': 0.8851174934725848} | {'accuracy': 0.8312559923298178} |
| 0.0025 | 972.0 | 325620 | 1.5886 | {'f1': 0.8799472295514512} | {'accuracy': 0.825503355704698} |
| 0.0025 | 973.0 | 325955 | 1.6544 | {'f1': 0.8852672750977836} | {'accuracy': 0.8312559923298178} |
| 0.0022 | 974.0 | 326290 | 1.6077 | {'f1': 0.8829300196206671} | {'accuracy': 0.8283796740172579} |
| 0.0055 | 975.0 | 326625 | 1.6093 | {'f1': 0.88} | {'accuracy': 0.8245445829338447} |
| 0.0055 | 976.0 | 326960 | 1.7068 | {'f1': 0.8787096774193548} | {'accuracy': 0.8197507190795782} |
| 0.003 | 977.0 | 327295 | 1.6335 | {'f1': 0.8809993425378042} | {'accuracy': 0.8264621284755513} |
| 0.0029 | 978.0 | 327630 | 1.6634 | {'f1': 0.8779220779220779} | {'accuracy': 0.8197507190795782} |
| 0.0029 | 979.0 | 327965 | 1.6256 | {'f1': 0.8825438027255029} | {'accuracy': 0.8264621284755513} |
| 0.0057 | 980.0 | 328300 | 1.6660 | {'f1': 0.8814432989690721} | {'accuracy': 0.8235858101629914} |
| 0.003 | 981.0 | 328635 | 1.5918 | {'f1': 0.8839344262295081} | {'accuracy': 0.8302972195589645} |
| 0.003 | 982.0 | 328970 | 1.7089 | {'f1': 0.8823151125401929} | {'accuracy': 0.8245445829338447} |
| 0.0038 | 983.0 | 329305 | 1.6115 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0031 | 984.0 | 329640 | 1.5545 | {'f1': 0.8805280528052805} | {'accuracy': 0.8264621284755513} |
| 0.0031 | 985.0 | 329975 | 1.6240 | {'f1': 0.8823529411764706} | {'accuracy': 0.8274209012464045} |
| 0.0015 | 986.0 | 330310 | 1.6431 | {'f1': 0.8832684824902725} | {'accuracy': 0.8274209012464045} |
| 0.0035 | 987.0 | 330645 | 1.5873 | {'f1': 0.884967320261438} | {'accuracy': 0.8312559923298178} |
| 0.0035 | 988.0 | 330980 | 1.5421 | {'f1': 0.8837516512549538} | {'accuracy': 0.8312559923298178} |
| 0.0039 | 989.0 | 331315 | 1.5917 | {'f1': 0.8841826604897418} | {'accuracy': 0.8322147651006712} |
| 0.0014 | 990.0 | 331650 | 1.5912 | {'f1': 0.8875582168995342} | {'accuracy': 0.837967401725791} |
| 0.0014 | 991.0 | 331985 | 1.6806 | {'f1': 0.8835078534031414} | {'accuracy': 0.8293384467881112} |
| 0.0026 | 992.0 | 332320 | 1.6707 | {'f1': 0.8843626806833115} | {'accuracy': 0.8312559923298178} |
| 0.0017 | 993.0 | 332655 | 1.8238 | {'f1': 0.882466281310212} | {'accuracy': 0.8245445829338447} |
| 0.0017 | 994.0 | 332990 | 1.6485 | {'f1': 0.8822751322751322} | {'accuracy': 0.8293384467881112} |
| 0.0026 | 995.0 | 333325 | 1.6771 | {'f1': 0.8816219751471549} | {'accuracy': 0.8264621284755513} |
| 0.0035 | 996.0 | 333660 | 1.7282 | {'f1': 0.8803641092327698} | {'accuracy': 0.8235858101629914} |
| 0.0035 | 997.0 | 333995 | 1.7101 | {'f1': 0.8797920727745288} | {'accuracy': 0.822627037392138} |
| 0.0035 | 998.0 | 334330 | 1.7343 | {'f1': 0.8808290155440415} | {'accuracy': 0.8235858101629914} |
| 0.0025 | 999.0 | 334665 | 1.8057 | {'f1': 0.8784565916398713} | {'accuracy': 0.8187919463087249} |
| 0.0019 | 1000.0 | 335000 | 1.8351 | {'f1': 0.8804627249357326} | {'accuracy': 0.8216682646212847} |
| 0.0019 | 1001.0 | 335335 | 1.6279 | {'f1': 0.8844884488448844} | {'accuracy': 0.8322147651006712} |
| 0.006 | 1002.0 | 335670 | 1.6531 | {'f1': 0.8852459016393444} | {'accuracy': 0.8322147651006712} |
| 0.004 | 1003.0 | 336005 | 1.6674 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.004 | 1004.0 | 336340 | 1.7113 | {'f1': 0.882084690553746} | {'accuracy': 0.8264621284755513} |
| 0.0022 | 1005.0 | 336675 | 1.6658 | {'f1': 0.8835078534031414} | {'accuracy': 0.8293384467881112} |
| 0.0039 | 1006.0 | 337010 | 1.7137 | {'f1': 0.8788075178224238} | {'accuracy': 0.8207094918504314} |
| 0.0039 | 1007.0 | 337345 | 1.6736 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0052 | 1008.0 | 337680 | 1.5942 | {'f1': 0.8817766165904638} | {'accuracy': 0.8264621284755513} |
| 0.0072 | 1009.0 | 338015 | 1.6167 | {'f1': 0.8805681084570691} | {'accuracy': 0.822627037392138} |
| 0.0072 | 1010.0 | 338350 | 1.5201 | {'f1': 0.8883048620236531} | {'accuracy': 0.8370086289549377} |
| 0.0021 | 1011.0 | 338685 | 1.5181 | {'f1': 0.8846407382992749} | {'accuracy': 0.8322147651006712} |
| 0.0023 | 1012.0 | 339020 | 1.5751 | {'f1': 0.8871493803000652} | {'accuracy': 0.8341323106423778} |
| 0.0023 | 1013.0 | 339355 | 1.5257 | {'f1': 0.88561872909699} | {'accuracy': 0.8360498561840843} |
| 0.0023 | 1014.0 | 339690 | 1.6862 | {'f1': 0.8821635544108178} | {'accuracy': 0.8245445829338447} |
| 0.0035 | 1015.0 | 340025 | 1.5701 | {'f1': 0.8836291913214991} | {'accuracy': 0.8302972195589645} |
| 0.0035 | 1016.0 | 340360 | 1.5413 | {'f1': 0.8850498338870432} | {'accuracy': 0.8341323106423778} |
| 0.0026 | 1017.0 | 340695 | 1.7485 | {'f1': 0.8803088803088803} | {'accuracy': 0.8216682646212847} |
| 0.0028 | 1018.0 | 341030 | 1.7583 | {'f1': 0.8823151125401929} | {'accuracy': 0.8245445829338447} |
| 0.0028 | 1019.0 | 341365 | 1.7090 | {'f1': 0.8828478964401294} | {'accuracy': 0.8264621284755513} |
| 0.003 | 1020.0 | 341700 | 1.5417 | {'f1': 0.8875661375661376} | {'accuracy': 0.8370086289549377} |
| 0.0075 | 1021.0 | 342035 | 1.5904 | {'f1': 0.8874430709173715} | {'accuracy': 0.8341323106423778} |
| 0.0075 | 1022.0 | 342370 | 1.5910 | {'f1': 0.8865710560625816} | {'accuracy': 0.8331735378715245} |
| 0.0026 | 1023.0 | 342705 | 1.5547 | {'f1': 0.8859591298615689} | {'accuracy': 0.8341323106423778} |
| 0.0041 | 1024.0 | 343040 | 1.5638 | {'f1': 0.8891820580474935} | {'accuracy': 0.8389261744966443} |
| 0.0041 | 1025.0 | 343375 | 1.7581 | {'f1': 0.8795878943979395} | {'accuracy': 0.8207094918504314} |
| 0.004 | 1026.0 | 343710 | 1.5905 | {'f1': 0.8865435356200528} | {'accuracy': 0.835091083413231} |
| 0.0028 | 1027.0 | 344045 | 1.7152 | {'f1': 0.8831504196255648} | {'accuracy': 0.8264621284755513} |
| 0.0028 | 1028.0 | 344380 | 1.6984 | {'f1': 0.8847150259067358} | {'accuracy': 0.8293384467881112} |
| 0.0021 | 1029.0 | 344715 | 1.6455 | {'f1': 0.8875816993464052} | {'accuracy': 0.835091083413231} |
| 0.0029 | 1030.0 | 345050 | 1.5999 | {'f1': 0.8885959129861568} | {'accuracy': 0.837967401725791} |
| 0.0029 | 1031.0 | 345385 | 1.6865 | {'f1': 0.8861418347430059} | {'accuracy': 0.8322147651006712} |
| 0.0015 | 1032.0 | 345720 | 1.5991 | {'f1': 0.8887417218543047} | {'accuracy': 0.8389261744966443} |
| 0.0054 | 1033.0 | 346055 | 1.8527 | {'f1': 0.8778625954198473} | {'accuracy': 0.8159156279961649} |
| 0.0054 | 1034.0 | 346390 | 1.5213 | {'f1': 0.8918558077436584} | {'accuracy': 0.8446788111217641} |
| 0.0037 | 1035.0 | 346725 | 1.6267 | {'f1': 0.8868541530412034} | {'accuracy': 0.8341323106423778} |
| 0.0035 | 1036.0 | 347060 | 1.5801 | {'f1': 0.8883013879709186} | {'accuracy': 0.837967401725791} |
| 0.0035 | 1037.0 | 347395 | 1.6451 | {'f1': 0.8884540117416829} | {'accuracy': 0.8360498561840843} |
| 0.0037 | 1038.0 | 347730 | 1.5951 | {'f1': 0.8884488448844885} | {'accuracy': 0.837967401725791} |
| 0.0031 | 1039.0 | 348065 | 1.5840 | {'f1': 0.8874172185430463} | {'accuracy': 0.8370086289549377} |
| 0.0031 | 1040.0 | 348400 | 1.6453 | {'f1': 0.888597640891219} | {'accuracy': 0.8370086289549377} |
| 0.0031 | 1041.0 | 348735 | 1.6136 | {'f1': 0.8875661375661376} | {'accuracy': 0.8370086289549377} |
| 0.0022 | 1042.0 | 349070 | 1.6422 | {'f1': 0.8903566710700133} | {'accuracy': 0.840843720038351} |
| 0.0022 | 1043.0 | 349405 | 1.6441 | {'f1': 0.8881621975147155} | {'accuracy': 0.8360498561840843} |
| 0.0032 | 1044.0 | 349740 | 1.7125 | {'f1': 0.8841423948220065} | {'accuracy': 0.8283796740172579} |
| 0.0044 | 1045.0 | 350075 | 1.6543 | {'f1': 0.8859934853420196} | {'accuracy': 0.8322147651006712} |
| 0.0044 | 1046.0 | 350410 | 1.6956 | {'f1': 0.8847150259067358} | {'accuracy': 0.8293384467881112} |
| 0.0042 | 1047.0 | 350745 | 1.5667 | {'f1': 0.8899143045484509} | {'accuracy': 0.8398849472674976} |
| 0.0041 | 1048.0 | 351080 | 1.5807 | {'f1': 0.8861256544502618} | {'accuracy': 0.8331735378715245} |
| 0.0041 | 1049.0 | 351415 | 1.5926 | {'f1': 0.8858267716535433} | {'accuracy': 0.8331735378715245} |
| 0.0018 | 1050.0 | 351750 | 1.6285 | {'f1': 0.8856956237753103} | {'accuracy': 0.8322147651006712} |
| 0.0029 | 1051.0 | 352085 | 1.5853 | {'f1': 0.8874259381171825} | {'accuracy': 0.8360498561840843} |
| 0.0029 | 1052.0 | 352420 | 1.6858 | {'f1': 0.8826597131681878} | {'accuracy': 0.8274209012464045} |
| 0.0032 | 1053.0 | 352755 | 1.7886 | {'f1': 0.8808757244043787} | {'accuracy': 0.822627037392138} |
| 0.0023 | 1054.0 | 353090 | 1.6543 | {'f1': 0.8837820091923834} | {'accuracy': 0.8302972195589645} |
| 0.0023 | 1055.0 | 353425 | 1.6165 | {'f1': 0.8839050131926122} | {'accuracy': 0.8312559923298178} |
| 0.0036 | 1056.0 | 353760 | 1.6832 | {'f1': 0.8833876221498371} | {'accuracy': 0.8283796740172579} |
| 0.0036 | 1057.0 | 354095 | 1.6402 | {'f1': 0.8852672750977836} | {'accuracy': 0.8312559923298178} |
| 0.0036 | 1058.0 | 354430 | 1.6821 | {'f1': 0.8828478964401294} | {'accuracy': 0.8264621284755513} |
| 0.0024 | 1059.0 | 354765 | 1.5917 | {'f1': 0.8852242744063324} | {'accuracy': 0.8331735378715245} |
| 0.0043 | 1060.0 | 355100 | 1.6325 | {'f1': 0.8851174934725848} | {'accuracy': 0.8312559923298178} |
| 0.0043 | 1061.0 | 355435 | 1.5850 | {'f1': 0.8846153846153847} | {'accuracy': 0.8331735378715245} |
| 0.0039 | 1062.0 | 355770 | 1.5670 | {'f1': 0.885506287227002} | {'accuracy': 0.8341323106423778} |
| 0.0019 | 1063.0 | 356105 | 1.6631 | {'f1': 0.8880208333333334} | {'accuracy': 0.835091083413231} |
| 0.0019 | 1064.0 | 356440 | 1.5969 | {'f1': 0.886259040105194} | {'accuracy': 0.8341323106423778} |
| 0.0036 | 1065.0 | 356775 | 1.5846 | {'f1': 0.8839050131926122} | {'accuracy': 0.8312559923298178} |
| 0.0029 | 1066.0 | 357110 | 1.6015 | {'f1': 0.8872870249017039} | {'accuracy': 0.835091083413231} |
| 0.0029 | 1067.0 | 357445 | 1.6231 | {'f1': 0.8862745098039215} | {'accuracy': 0.8331735378715245} |
| 0.0028 | 1068.0 | 357780 | 1.5626 | {'f1': 0.88021534320323} | {'accuracy': 0.8293384467881112} |
| 0.0032 | 1069.0 | 358115 | 1.6495 | {'f1': 0.8842380640941793} | {'accuracy': 0.8302972195589645} |
| 0.0032 | 1070.0 | 358450 | 1.6486 | {'f1': 0.8825065274151437} | {'accuracy': 0.8274209012464045} |
| 0.0047 | 1071.0 | 358785 | 1.6075 | {'f1': 0.8820039551746869} | {'accuracy': 0.8283796740172579} |
| 0.0028 | 1072.0 | 359120 | 1.7135 | {'f1': 0.8799480856586632} | {'accuracy': 0.822627037392138} |
| 0.0028 | 1073.0 | 359455 | 1.6339 | {'f1': 0.8790482485128884} | {'accuracy': 0.8245445829338447} |
| 0.0031 | 1074.0 | 359790 | 1.6787 | {'f1': 0.8830829523187459} | {'accuracy': 0.8283796740172579} |
| 0.0048 | 1075.0 | 360125 | 1.6607 | {'f1': 0.8797920727745288} | {'accuracy': 0.822627037392138} |
| 0.0048 | 1076.0 | 360460 | 1.7147 | {'f1': 0.8777633289986997} | {'accuracy': 0.8197507190795782} |
| 0.0016 | 1077.0 | 360795 | 1.6353 | {'f1': 0.8782722513089006} | {'accuracy': 0.8216682646212847} |
| 0.0038 | 1078.0 | 361130 | 1.5526 | {'f1': 0.8822355289421158} | {'accuracy': 0.8302972195589645} |
| 0.0038 | 1079.0 | 361465 | 1.6613 | {'f1': 0.8793774319066148} | {'accuracy': 0.8216682646212847} |
| 0.0038 | 1080.0 | 361800 | 1.6224 | {'f1': 0.8816219751471549} | {'accuracy': 0.8264621284755513} |
| 0.0014 | 1081.0 | 362135 | 1.6471 | {'f1': 0.8804702808621816} | {'accuracy': 0.8245445829338447} |
| 0.0014 | 1082.0 | 362470 | 1.6292 | {'f1': 0.8858267716535433} | {'accuracy': 0.8331735378715245} |
| 0.0035 | 1083.0 | 362805 | 1.6660 | {'f1': 0.8826597131681878} | {'accuracy': 0.8274209012464045} |
| 0.0024 | 1084.0 | 363140 | 1.6730 | {'f1': 0.8807817589576548} | {'accuracy': 0.8245445829338447} |
| 0.0024 | 1085.0 | 363475 | 1.6435 | {'f1': 0.8832354859752121} | {'accuracy': 0.8283796740172579} |
| 0.0037 | 1086.0 | 363810 | 1.5428 | {'f1': 0.8847682119205298} | {'accuracy': 0.8331735378715245} |
| 0.0048 | 1087.0 | 364145 | 1.5604 | {'f1': 0.8858267716535433} | {'accuracy': 0.8331735378715245} |
| 0.0048 | 1088.0 | 364480 | 1.6473 | {'f1': 0.8780804150453955} | {'accuracy': 0.8197507190795782} |
| 0.0042 | 1089.0 | 364815 | 1.5494 | {'f1': 0.8859591298615689} | {'accuracy': 0.8341323106423778} |
| 0.0025 | 1090.0 | 365150 | 1.6817 | {'f1': 0.8824289405684754} | {'accuracy': 0.825503355704698} |
| 0.0025 | 1091.0 | 365485 | 1.5962 | {'f1': 0.8849441157133465} | {'accuracy': 0.8322147651006712} |
| 0.0027 | 1092.0 | 365820 | 1.6261 | {'f1': 0.8827037773359842} | {'accuracy': 0.8302972195589645} |
| 0.0023 | 1093.0 | 366155 | 1.6884 | {'f1': 0.8865845755022683} | {'accuracy': 0.8322147651006712} |
| 0.0023 | 1094.0 | 366490 | 1.6529 | {'f1': 0.8865710560625816} | {'accuracy': 0.8331735378715245} |
| 0.0048 | 1095.0 | 366825 | 1.6021 | {'f1': 0.8860927152317881} | {'accuracy': 0.835091083413231} |
| 0.0043 | 1096.0 | 367160 | 1.6417 | {'f1': 0.8864229765013055} | {'accuracy': 0.8331735378715245} |
| 0.0043 | 1097.0 | 367495 | 1.5660 | {'f1': 0.8859591298615689} | {'accuracy': 0.8341323106423778} |
| 0.0036 | 1098.0 | 367830 | 1.5302 | {'f1': 0.885486018641811} | {'accuracy': 0.835091083413231} |
| 0.0025 | 1099.0 | 368165 | 1.5780 | {'f1': 0.8856382978723404} | {'accuracy': 0.835091083413231} |
| 0.0034 | 1100.0 | 368500 | 1.6380 | {'f1': 0.8813559322033898} | {'accuracy': 0.825503355704698} |
| 0.0034 | 1101.0 | 368835 | 1.5353 | {'f1': 0.8877076411960134} | {'accuracy': 0.837967401725791} |
| 0.0028 | 1102.0 | 369170 | 1.5930 | {'f1': 0.8850952068286276} | {'accuracy': 0.8322147651006712} |
| 0.0016 | 1103.0 | 369505 | 1.6172 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0016 | 1104.0 | 369840 | 1.6414 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0027 | 1105.0 | 370175 | 1.6066 | {'f1': 0.8848167539267016} | {'accuracy': 0.8312559923298178} |
| 0.003 | 1106.0 | 370510 | 1.6791 | {'f1': 0.8868660598179455} | {'accuracy': 0.8331735378715245} |
| 0.003 | 1107.0 | 370845 | 1.6333 | {'f1': 0.8833551769331585} | {'accuracy': 0.8293384467881112} |
| 0.0047 | 1108.0 | 371180 | 1.5901 | {'f1': 0.8875661375661376} | {'accuracy': 0.8370086289549377} |
| 0.0045 | 1109.0 | 371515 | 1.5836 | {'f1': 0.8865573770491804} | {'accuracy': 0.8341323106423778} |
| 0.0045 | 1110.0 | 371850 | 1.5848 | {'f1': 0.8871391076115486} | {'accuracy': 0.835091083413231} |
| 0.0027 | 1111.0 | 372185 | 1.5712 | {'f1': 0.8864229765013055} | {'accuracy': 0.8331735378715245} |
| 0.0036 | 1112.0 | 372520 | 1.6526 | {'f1': 0.8852883992222943} | {'accuracy': 0.8302972195589645} |
| 0.0036 | 1113.0 | 372855 | 1.5732 | {'f1': 0.8881578947368421} | {'accuracy': 0.8370086289549377} |
| 0.0039 | 1114.0 | 373190 | 1.5640 | {'f1': 0.8906455862977603} | {'accuracy': 0.840843720038351} |
| 0.0042 | 1115.0 | 373525 | 1.5411 | {'f1': 0.8887417218543047} | {'accuracy': 0.8389261744966443} |
| 0.0042 | 1116.0 | 373860 | 1.5553 | {'f1': 0.889920424403183} | {'accuracy': 0.840843720038351} |
| 0.002 | 1117.0 | 374195 | 1.6430 | {'f1': 0.8839922229423202} | {'accuracy': 0.8283796740172579} |
| 0.003 | 1118.0 | 374530 | 1.5749 | {'f1': 0.8885959129861568} | {'accuracy': 0.837967401725791} |
| 0.003 | 1119.0 | 374865 | 1.6465 | {'f1': 0.8861418347430059} | {'accuracy': 0.8322147651006712} |
| 0.0028 | 1120.0 | 375200 | 1.6485 | {'f1': 0.8838416612589227} | {'accuracy': 0.8283796740172579} |
| 0.0022 | 1121.0 | 375535 | 1.6010 | {'f1': 0.8850952068286276} | {'accuracy': 0.8322147651006712} |
| 0.0022 | 1122.0 | 375870 | 1.6192 | {'f1': 0.8871391076115486} | {'accuracy': 0.835091083413231} |
| 0.0031 | 1123.0 | 376205 | 1.5876 | {'f1': 0.8865435356200528} | {'accuracy': 0.835091083413231} |
| 0.0014 | 1124.0 | 376540 | 1.6109 | {'f1': 0.8853754940711462} | {'accuracy': 0.8331735378715245} |
| 0.0014 | 1125.0 | 376875 | 1.6595 | {'f1': 0.8865710560625816} | {'accuracy': 0.8331735378715245} |
| 0.0018 | 1126.0 | 377210 | 1.5596 | {'f1': 0.8897795591182364} | {'accuracy': 0.8418024928092043} |
| 0.0024 | 1127.0 | 377545 | 1.6213 | {'f1': 0.8863936591809775} | {'accuracy': 0.835091083413231} |
| 0.0024 | 1128.0 | 377880 | 1.5969 | {'f1': 0.8842809364548496} | {'accuracy': 0.8341323106423778} |
| 0.0024 | 1129.0 | 378215 | 1.5904 | {'f1': 0.885887913571911} | {'accuracy': 0.837967401725791} |
| 0.0032 | 1130.0 | 378550 | 1.7256 | {'f1': 0.8862897985705004} | {'accuracy': 0.8322147651006712} |
| 0.0032 | 1131.0 | 378885 | 1.8606 | {'f1': 0.8789237668161436} | {'accuracy': 0.8187919463087249} |
| 0.0019 | 1132.0 | 379220 | 1.7037 | {'f1': 0.8850726552179656} | {'accuracy': 0.8331735378715245} |
| 0.0026 | 1133.0 | 379555 | 1.7805 | {'f1': 0.8845401174168297} | {'accuracy': 0.8302972195589645} |
| 0.0026 | 1134.0 | 379890 | 1.8544 | {'f1': 0.8815958815958816} | {'accuracy': 0.8235858101629914} |
| 0.0028 | 1135.0 | 380225 | 1.7224 | {'f1': 0.883322346736981} | {'accuracy': 0.8302972195589645} |
| 0.0018 | 1136.0 | 380560 | 1.8129 | {'f1': 0.8838416612589227} | {'accuracy': 0.8283796740172579} |
| 0.0018 | 1137.0 | 380895 | 1.6931 | {'f1': 0.885506287227002} | {'accuracy': 0.8341323106423778} |
| 0.003 | 1138.0 | 381230 | 1.7690 | {'f1': 0.8831168831168831} | {'accuracy': 0.8274209012464045} |
| 0.003 | 1139.0 | 381565 | 1.7252 | {'f1': 0.886408404464872} | {'accuracy': 0.8341323106423778} |
| 0.003 | 1140.0 | 381900 | 1.7165 | {'f1': 0.8846407382992749} | {'accuracy': 0.8322147651006712} |
| 0.0018 | 1141.0 | 382235 | 1.7154 | {'f1': 0.8877216021011162} | {'accuracy': 0.8360498561840843} |
| 0.0027 | 1142.0 | 382570 | 1.6908 | {'f1': 0.8821192052980134} | {'accuracy': 0.8293384467881112} |
| 0.0027 | 1143.0 | 382905 | 1.7904 | {'f1': 0.8852883992222943} | {'accuracy': 0.8302972195589645} |
| 0.0021 | 1144.0 | 383240 | 1.8250 | {'f1': 0.8825806451612904} | {'accuracy': 0.825503355704698} |
| 0.0016 | 1145.0 | 383575 | 1.6276 | {'f1': 0.8877005347593583} | {'accuracy': 0.8389261744966443} |
| 0.0016 | 1146.0 | 383910 | 1.6543 | {'f1': 0.8875661375661376} | {'accuracy': 0.8370086289549377} |
| 0.0039 | 1147.0 | 384245 | 1.6855 | {'f1': 0.8847926267281107} | {'accuracy': 0.8322147651006712} |
| 0.0015 | 1148.0 | 384580 | 1.7470 | {'f1': 0.8880157170923381} | {'accuracy': 0.8360498561840843} |
| 0.0015 | 1149.0 | 384915 | 1.7802 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0016 | 1150.0 | 385250 | 1.7562 | {'f1': 0.8851174934725848} | {'accuracy': 0.8312559923298178} |
| 0.0018 | 1151.0 | 385585 | 1.7715 | {'f1': 0.8856956237753103} | {'accuracy': 0.8322147651006712} |
| 0.0018 | 1152.0 | 385920 | 1.7989 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0019 | 1153.0 | 386255 | 1.8137 | {'f1': 0.8851395197923426} | {'accuracy': 0.8302972195589645} |
| 0.0042 | 1154.0 | 386590 | 1.7496 | {'f1': 0.8858267716535433} | {'accuracy': 0.8331735378715245} |
| 0.0042 | 1155.0 | 386925 | 1.7889 | {'f1': 0.8836601307189542} | {'accuracy': 0.8293384467881112} |
| 0.003 | 1156.0 | 387260 | 1.6654 | {'f1': 0.8856576338400529} | {'accuracy': 0.8341323106423778} |
| 0.0027 | 1157.0 | 387595 | 1.9299 | {'f1': 0.8777070063694268} | {'accuracy': 0.8159156279961649} |
| 0.0027 | 1158.0 | 387930 | 1.6656 | {'f1': 0.8861092824226466} | {'accuracy': 0.8341323106423778} |
| 0.0044 | 1159.0 | 388265 | 1.7469 | {'f1': 0.8828478964401294} | {'accuracy': 0.8264621284755513} |
| 0.0026 | 1160.0 | 388600 | 1.7098 | {'f1': 0.8836601307189542} | {'accuracy': 0.8293384467881112} |
| 0.0026 | 1161.0 | 388935 | 1.7530 | {'f1': 0.8826597131681878} | {'accuracy': 0.8274209012464045} |
| 0.0017 | 1162.0 | 389270 | 1.7884 | {'f1': 0.8809831824062095} | {'accuracy': 0.8235858101629914} |
| 0.0028 | 1163.0 | 389605 | 1.7387 | {'f1': 0.8848167539267016} | {'accuracy': 0.8312559923298178} |
| 0.0028 | 1164.0 | 389940 | 1.7094 | {'f1': 0.8852242744063324} | {'accuracy': 0.8331735378715245} |
| 0.0027 | 1165.0 | 390275 | 1.8461 | {'f1': 0.8811817597944765} | {'accuracy': 0.822627037392138} |
| 0.0037 | 1166.0 | 390610 | 1.7538 | {'f1': 0.8827766863130322} | {'accuracy': 0.8283796740172579} |
| 0.0037 | 1167.0 | 390945 | 1.7664 | {'f1': 0.8826597131681878} | {'accuracy': 0.8274209012464045} |
| 0.0016 | 1168.0 | 391280 | 1.7127 | {'f1': 0.8811556139198949} | {'accuracy': 0.8264621284755513} |
| 0.0028 | 1169.0 | 391615 | 1.7404 | {'f1': 0.8816219751471549} | {'accuracy': 0.8264621284755513} |
| 0.0028 | 1170.0 | 391950 | 1.6900 | {'f1': 0.8840579710144927} | {'accuracy': 0.8312559923298178} |
| 0.0024 | 1171.0 | 392285 | 1.6964 | {'f1': 0.8849441157133465} | {'accuracy': 0.8322147651006712} |
| 0.0023 | 1172.0 | 392620 | 1.7047 | {'f1': 0.8849441157133465} | {'accuracy': 0.8322147651006712} |
| 0.0023 | 1173.0 | 392955 | 1.7315 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0031 | 1174.0 | 393290 | 1.6667 | {'f1': 0.8871287128712871} | {'accuracy': 0.8360498561840843} |
| 0.0017 | 1175.0 | 393625 | 1.7351 | {'f1': 0.8838120104438641} | {'accuracy': 0.8293384467881112} |
| 0.0017 | 1176.0 | 393960 | 1.7235 | {'f1': 0.8861256544502618} | {'accuracy': 0.8331735378715245} |
| 0.0021 | 1177.0 | 394295 | 1.6348 | {'f1': 0.885486018641811} | {'accuracy': 0.835091083413231} |
| 0.0058 | 1178.0 | 394630 | 1.6370 | {'f1': 0.8852023888520238} | {'accuracy': 0.8341323106423778} |
| 0.0058 | 1179.0 | 394965 | 1.6521 | {'f1': 0.8835662009314704} | {'accuracy': 0.8322147651006712} |
| 0.002 | 1180.0 | 395300 | 1.6771 | {'f1': 0.8852242744063324} | {'accuracy': 0.8331735378715245} |
| 0.0045 | 1181.0 | 395635 | 1.6599 | {'f1': 0.8851174934725848} | {'accuracy': 0.8312559923298178} |
| 0.0045 | 1182.0 | 395970 | 1.6023 | {'f1': 0.8840291583830351} | {'accuracy': 0.8322147651006712} |
| 0.0018 | 1183.0 | 396305 | 1.6570 | {'f1': 0.8872870249017039} | {'accuracy': 0.835091083413231} |
| 0.0023 | 1184.0 | 396640 | 1.7207 | {'f1': 0.8809831824062095} | {'accuracy': 0.8235858101629914} |
| 0.0023 | 1185.0 | 396975 | 1.6423 | {'f1': 0.888597640891219} | {'accuracy': 0.8370086289549377} |
| 0.0021 | 1186.0 | 397310 | 1.6735 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0035 | 1187.0 | 397645 | 1.6051 | {'f1': 0.8866930171277998} | {'accuracy': 0.835091083413231} |
| 0.0035 | 1188.0 | 397980 | 1.7041 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0018 | 1189.0 | 398315 | 1.6708 | {'f1': 0.885396201702685} | {'accuracy': 0.8322147651006712} |
| 0.0022 | 1190.0 | 398650 | 1.6910 | {'f1': 0.8839634941329856} | {'accuracy': 0.8293384467881112} |
| 0.0022 | 1191.0 | 398985 | 1.5801 | {'f1': 0.888443553774215} | {'accuracy': 0.8398849472674976} |
| 0.003 | 1192.0 | 399320 | 1.7103 | {'f1': 0.8842380640941793} | {'accuracy': 0.8302972195589645} |
| 0.0028 | 1193.0 | 399655 | 1.7610 | {'f1': 0.8822381262199089} | {'accuracy': 0.8264621284755513} |
| 0.0028 | 1194.0 | 399990 | 1.6810 | {'f1': 0.8875739644970414} | {'accuracy': 0.8360498561840843} |
| 0.0018 | 1195.0 | 400325 | 1.6908 | {'f1': 0.8869908015768725} | {'accuracy': 0.835091083413231} |
| 0.0017 | 1196.0 | 400660 | 1.7498 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1197.0 | 400995 | 1.7100 | {'f1': 0.8858085808580859} | {'accuracy': 0.8341323106423778} |
| 0.0031 | 1198.0 | 401330 | 1.7323 | {'f1': 0.8875739644970414} | {'accuracy': 0.8360498561840843} |
| 0.0015 | 1199.0 | 401665 | 1.8114 | {'f1': 0.8861418347430059} | {'accuracy': 0.8322147651006712} |
| 0.0027 | 1200.0 | 402000 | 1.7582 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0027 | 1201.0 | 402335 | 1.6836 | {'f1': 0.8896276595744681} | {'accuracy': 0.840843720038351} |
| 0.0028 | 1202.0 | 402670 | 1.7912 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0028 | 1203.0 | 403005 | 1.8033 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0028 | 1204.0 | 403340 | 1.7985 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0019 | 1205.0 | 403675 | 1.7905 | {'f1': 0.8851174934725848} | {'accuracy': 0.8312559923298178} |
| 0.0032 | 1206.0 | 404010 | 1.7356 | {'f1': 0.886408404464872} | {'accuracy': 0.8341323106423778} |
| 0.0032 | 1207.0 | 404345 | 1.6976 | {'f1': 0.8855461085676913} | {'accuracy': 0.8322147651006712} |
| 0.0044 | 1208.0 | 404680 | 1.7209 | {'f1': 0.8858447488584474} | {'accuracy': 0.8322147651006712} |
| 0.0016 | 1209.0 | 405015 | 1.7403 | {'f1': 0.8841145833333333} | {'accuracy': 0.8293384467881112} |
| 0.0016 | 1210.0 | 405350 | 1.6621 | {'f1': 0.8858085808580859} | {'accuracy': 0.8341323106423778} |
| 0.0032 | 1211.0 | 405685 | 1.7098 | {'f1': 0.8872870249017039} | {'accuracy': 0.835091083413231} |
| 0.0022 | 1212.0 | 406020 | 1.6697 | {'f1': 0.8863936591809775} | {'accuracy': 0.835091083413231} |
| 0.0022 | 1213.0 | 406355 | 1.7366 | {'f1': 0.88296488946684} | {'accuracy': 0.8274209012464045} |
| 0.0025 | 1214.0 | 406690 | 1.6412 | {'f1': 0.8887417218543047} | {'accuracy': 0.8389261744966443} |
| 0.0026 | 1215.0 | 407025 | 1.7191 | {'f1': 0.8838120104438641} | {'accuracy': 0.8293384467881112} |
| 0.0026 | 1216.0 | 407360 | 1.6733 | {'f1': 0.8884514435695539} | {'accuracy': 0.8370086289549377} |
| 0.0035 | 1217.0 | 407695 | 1.7227 | {'f1': 0.8855656697009102} | {'accuracy': 0.8312559923298178} |
| 0.0018 | 1218.0 | 408030 | 1.7874 | {'f1': 0.883419689119171} | {'accuracy': 0.8274209012464045} |
| 0.0018 | 1219.0 | 408365 | 1.7690 | {'f1': 0.8851395197923426} | {'accuracy': 0.8302972195589645} |
| 0.0022 | 1220.0 | 408700 | 1.8313 | {'f1': 0.8808757244043787} | {'accuracy': 0.822627037392138} |
| 0.0052 | 1221.0 | 409035 | 1.7186 | {'f1': 0.8851174934725848} | {'accuracy': 0.8312559923298178} |
| 0.0052 | 1222.0 | 409370 | 1.7555 | {'f1': 0.8812459441920829} | {'accuracy': 0.8245445829338447} |
| 0.0039 | 1223.0 | 409705 | 1.7314 | {'f1': 0.8798955613577023} | {'accuracy': 0.8235858101629914} |
| 0.0031 | 1224.0 | 410040 | 1.6025 | {'f1': 0.8877005347593583} | {'accuracy': 0.8389261744966443} |
| 0.0031 | 1225.0 | 410375 | 1.6785 | {'f1': 0.886259040105194} | {'accuracy': 0.8341323106423778} |
| 0.0018 | 1226.0 | 410710 | 1.8083 | {'f1': 0.8803088803088803} | {'accuracy': 0.8216682646212847} |
| 0.0018 | 1227.0 | 411045 | 1.6697 | {'f1': 0.8832555036691127} | {'accuracy': 0.8322147651006712} |
| 0.0018 | 1228.0 | 411380 | 1.7528 | {'f1': 0.8828125} | {'accuracy': 0.8274209012464045} |
| 0.0034 | 1229.0 | 411715 | 1.7647 | {'f1': 0.881664499349805} | {'accuracy': 0.825503355704698} |
| 0.0016 | 1230.0 | 412050 | 1.7080 | {'f1': 0.886259040105194} | {'accuracy': 0.8341323106423778} |
| 0.0016 | 1231.0 | 412385 | 1.7002 | {'f1': 0.8866930171277998} | {'accuracy': 0.835091083413231} |
| 0.0018 | 1232.0 | 412720 | 1.7542 | {'f1': 0.8858447488584474} | {'accuracy': 0.8322147651006712} |
| 0.0034 | 1233.0 | 413055 | 1.7277 | {'f1': 0.8841145833333333} | {'accuracy': 0.8293384467881112} |
| 0.0034 | 1234.0 | 413390 | 1.6909 | {'f1': 0.8859764089121887} | {'accuracy': 0.8331735378715245} |
| 0.0015 | 1235.0 | 413725 | 1.7052 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0025 | 1236.0 | 414060 | 1.7913 | {'f1': 0.8831504196255648} | {'accuracy': 0.8264621284755513} |
| 0.0025 | 1237.0 | 414395 | 1.7496 | {'f1': 0.8861418347430059} | {'accuracy': 0.8322147651006712} |
| 0.0029 | 1238.0 | 414730 | 1.6863 | {'f1': 0.8890347997373604} | {'accuracy': 0.837967401725791} |
| 0.0027 | 1239.0 | 415065 | 1.7087 | {'f1': 0.8805681084570691} | {'accuracy': 0.822627037392138} |
| 0.0027 | 1240.0 | 415400 | 1.5965 | {'f1': 0.8891820580474935} | {'accuracy': 0.8389261744966443} |
| 0.0054 | 1241.0 | 415735 | 1.6261 | {'f1': 0.887434554973822} | {'accuracy': 0.835091083413231} |
| 0.0017 | 1242.0 | 416070 | 1.5655 | {'f1': 0.888888888888889} | {'accuracy': 0.8389261744966443} |
| 0.0017 | 1243.0 | 416405 | 1.5952 | {'f1': 0.8884488448844885} | {'accuracy': 0.837967401725791} |
| 0.0033 | 1244.0 | 416740 | 1.6296 | {'f1': 0.8875816993464052} | {'accuracy': 0.835091083413231} |
| 0.0029 | 1245.0 | 417075 | 1.6133 | {'f1': 0.8883048620236531} | {'accuracy': 0.8370086289549377} |
| 0.0029 | 1246.0 | 417410 | 1.6047 | {'f1': 0.8878627968337731} | {'accuracy': 0.8370086289549377} |
| 0.0021 | 1247.0 | 417745 | 1.7268 | {'f1': 0.8835705045278138} | {'accuracy': 0.8274209012464045} |
| 0.0022 | 1248.0 | 418080 | 1.6843 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0022 | 1249.0 | 418415 | 1.6683 | {'f1': 0.8875816993464052} | {'accuracy': 0.835091083413231} |
| 0.0027 | 1250.0 | 418750 | 1.6704 | {'f1': 0.8881621975147155} | {'accuracy': 0.8360498561840843} |
| 0.0022 | 1251.0 | 419085 | 1.7550 | {'f1': 0.8833010960670536} | {'accuracy': 0.8264621284755513} |
| 0.0022 | 1252.0 | 419420 | 1.6882 | {'f1': 0.8865710560625816} | {'accuracy': 0.8331735378715245} |
| 0.004 | 1253.0 | 419755 | 1.6692 | {'f1': 0.8868541530412034} | {'accuracy': 0.8341323106423778} |
| 0.0022 | 1254.0 | 420090 | 1.6207 | {'f1': 0.8853333333333333} | {'accuracy': 0.835091083413231} |
| 0.0022 | 1255.0 | 420425 | 1.6777 | {'f1': 0.8872964169381107} | {'accuracy': 0.8341323106423778} |
| 0.0016 | 1256.0 | 420760 | 1.6322 | {'f1': 0.8885941644562333} | {'accuracy': 0.8389261744966443} |
| 0.0021 | 1257.0 | 421095 | 1.6463 | {'f1': 0.8880105401844532} | {'accuracy': 0.8370086289549377} |
| 0.0021 | 1258.0 | 421430 | 1.6737 | {'f1': 0.8884540117416829} | {'accuracy': 0.8360498561840843} |
| 0.0023 | 1259.0 | 421765 | 1.6487 | {'f1': 0.8890347997373604} | {'accuracy': 0.837967401725791} |
| 0.0034 | 1260.0 | 422100 | 1.6921 | {'f1': 0.8865710560625816} | {'accuracy': 0.8331735378715245} |
| 0.0034 | 1261.0 | 422435 | 1.5953 | {'f1': 0.8887408394403731} | {'accuracy': 0.8398849472674976} |
| 0.0045 | 1262.0 | 422770 | 1.7129 | {'f1': 0.8845654993514916} | {'accuracy': 0.8293384467881112} |
| 0.0017 | 1263.0 | 423105 | 1.7699 | {'f1': 0.8811369509043928} | {'accuracy': 0.8235858101629914} |
| 0.0017 | 1264.0 | 423440 | 1.6435 | {'f1': 0.8875816993464052} | {'accuracy': 0.835091083413231} |
| 0.0024 | 1265.0 | 423775 | 1.7540 | {'f1': 0.8808757244043787} | {'accuracy': 0.822627037392138} |
| 0.0042 | 1266.0 | 424110 | 1.6861 | {'f1': 0.8858625162127108} | {'accuracy': 0.8312559923298178} |
| 0.0042 | 1267.0 | 424445 | 1.6125 | {'f1': 0.8858267716535433} | {'accuracy': 0.8331735378715245} |
| 0.004 | 1268.0 | 424780 | 1.6888 | {'f1': 0.8839922229423202} | {'accuracy': 0.8283796740172579} |
| 0.0015 | 1269.0 | 425115 | 1.6839 | {'f1': 0.883419689119171} | {'accuracy': 0.8274209012464045} |
| 0.0015 | 1270.0 | 425450 | 1.6722 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0029 | 1271.0 | 425785 | 1.6979 | {'f1': 0.8838416612589227} | {'accuracy': 0.8283796740172579} |
| 0.0024 | 1272.0 | 426120 | 1.5824 | {'f1': 0.8862433862433863} | {'accuracy': 0.835091083413231} |
| 0.0024 | 1273.0 | 426455 | 1.7985 | {'f1': 0.8787684413085312} | {'accuracy': 0.8187919463087249} |
| 0.0044 | 1274.0 | 426790 | 1.6429 | {'f1': 0.8830486202365309} | {'accuracy': 0.8293384467881112} |
| 0.0025 | 1275.0 | 427125 | 1.5866 | {'f1': 0.8860759493670886} | {'accuracy': 0.8360498561840843} |
| 0.0025 | 1276.0 | 427460 | 1.7528 | {'f1': 0.8792769528728211} | {'accuracy': 0.8207094918504314} |
| 0.0028 | 1277.0 | 427795 | 1.8184 | {'f1': 0.8780487804878049} | {'accuracy': 0.8178331735378715} |
| 0.0016 | 1278.0 | 428130 | 1.7036 | {'f1': 0.8820445609436436} | {'accuracy': 0.8274209012464045} |
| 0.0016 | 1279.0 | 428465 | 1.7409 | {'f1': 0.8800521512385918} | {'accuracy': 0.8235858101629914} |
| 0.0022 | 1280.0 | 428800 | 1.7406 | {'f1': 0.8780169602087411} | {'accuracy': 0.8207094918504314} |
| 0.0039 | 1281.0 | 429135 | 1.6932 | {'f1': 0.8804702808621816} | {'accuracy': 0.8245445829338447} |
| 0.0039 | 1282.0 | 429470 | 1.6137 | {'f1': 0.8853545394300861} | {'accuracy': 0.8341323106423778} |
| 0.0026 | 1283.0 | 429805 | 1.7429 | {'f1': 0.8790637191157347} | {'accuracy': 0.8216682646212847} |
| 0.0017 | 1284.0 | 430140 | 1.6462 | {'f1': 0.8860927152317881} | {'accuracy': 0.835091083413231} |
| 0.0017 | 1285.0 | 430475 | 1.6180 | {'f1': 0.8874083944037308} | {'accuracy': 0.837967401725791} |
| 0.0024 | 1286.0 | 430810 | 1.7098 | {'f1': 0.8816219751471549} | {'accuracy': 0.8264621284755513} |
| 0.0015 | 1287.0 | 431145 | 1.6675 | {'f1': 0.8856382978723404} | {'accuracy': 0.835091083413231} |
| 0.0015 | 1288.0 | 431480 | 1.6877 | {'f1': 0.8844884488448844} | {'accuracy': 0.8322147651006712} |
| 0.0033 | 1289.0 | 431815 | 1.7807 | {'f1': 0.8799480856586632} | {'accuracy': 0.822627037392138} |
| 0.0045 | 1290.0 | 432150 | 1.7126 | {'f1': 0.8797920727745288} | {'accuracy': 0.822627037392138} |
| 0.0045 | 1291.0 | 432485 | 1.6483 | {'f1': 0.8806262230919765} | {'accuracy': 0.8245445829338447} |
| 0.0055 | 1292.0 | 432820 | 1.7432 | {'f1': 0.8798449612403101} | {'accuracy': 0.8216682646212847} |
| 0.0035 | 1293.0 | 433155 | 1.6595 | {'f1': 0.8797920727745288} | {'accuracy': 0.822627037392138} |
| 0.0035 | 1294.0 | 433490 | 1.6534 | {'f1': 0.8802083333333333} | {'accuracy': 0.8235858101629914} |
| 0.0024 | 1295.0 | 433825 | 1.5400 | {'f1': 0.8856382978723404} | {'accuracy': 0.835091083413231} |
| 0.0025 | 1296.0 | 434160 | 1.5690 | {'f1': 0.8863936591809775} | {'accuracy': 0.835091083413231} |
| 0.0025 | 1297.0 | 434495 | 1.6632 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0019 | 1298.0 | 434830 | 1.6372 | {'f1': 0.8832354859752121} | {'accuracy': 0.8283796740172579} |
| 0.0021 | 1299.0 | 435165 | 1.6924 | {'f1': 0.881399870382372} | {'accuracy': 0.8245445829338447} |
| 0.002 | 1300.0 | 435500 | 1.6588 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.002 | 1301.0 | 435835 | 1.6316 | {'f1': 0.8832020997375328} | {'accuracy': 0.8293384467881112} |
| 0.0016 | 1302.0 | 436170 | 1.5885 | {'f1': 0.8866930171277998} | {'accuracy': 0.835091083413231} |
| 0.0025 | 1303.0 | 436505 | 1.6519 | {'f1': 0.8839634941329856} | {'accuracy': 0.8293384467881112} |
| 0.0025 | 1304.0 | 436840 | 1.6364 | {'f1': 0.8848167539267016} | {'accuracy': 0.8312559923298178} |
| 0.0027 | 1305.0 | 437175 | 1.6846 | {'f1': 0.8851395197923426} | {'accuracy': 0.8302972195589645} |
| 0.0019 | 1306.0 | 437510 | 1.6704 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0019 | 1307.0 | 437845 | 1.6993 | {'f1': 0.8828478964401294} | {'accuracy': 0.8264621284755513} |
| 0.0019 | 1308.0 | 438180 | 1.6955 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1309.0 | 438515 | 1.6855 | {'f1': 0.88671875} | {'accuracy': 0.8331735378715245} |
| 0.0017 | 1310.0 | 438850 | 1.6172 | {'f1': 0.8877146631439895} | {'accuracy': 0.8370086289549377} |
| 0.0021 | 1311.0 | 439185 | 1.7192 | {'f1': 0.8844415752098128} | {'accuracy': 0.8283796740172579} |
| 0.0035 | 1312.0 | 439520 | 1.6323 | {'f1': 0.8875816993464052} | {'accuracy': 0.835091083413231} |
| 0.0035 | 1313.0 | 439855 | 1.6781 | {'f1': 0.8825806451612904} | {'accuracy': 0.825503355704698} |
| 0.0049 | 1314.0 | 440190 | 1.6834 | {'f1': 0.8812903225806452} | {'accuracy': 0.8235858101629914} |
| 0.0016 | 1315.0 | 440525 | 1.6194 | {'f1': 0.8852459016393444} | {'accuracy': 0.8322147651006712} |
| 0.0016 | 1316.0 | 440860 | 1.6558 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0027 | 1317.0 | 441195 | 1.6291 | {'f1': 0.8875816993464052} | {'accuracy': 0.835091083413231} |
| 0.0026 | 1318.0 | 441530 | 1.6082 | {'f1': 0.8880157170923381} | {'accuracy': 0.8360498561840843} |
| 0.0026 | 1319.0 | 441865 | 1.6212 | {'f1': 0.8871391076115486} | {'accuracy': 0.835091083413231} |
| 0.0017 | 1320.0 | 442200 | 1.6341 | {'f1': 0.8853754940711462} | {'accuracy': 0.8331735378715245} |
| 0.0026 | 1321.0 | 442535 | 1.6350 | {'f1': 0.8847926267281107} | {'accuracy': 0.8322147651006712} |
| 0.0026 | 1322.0 | 442870 | 1.7476 | {'f1': 0.8842921784098255} | {'accuracy': 0.8283796740172579} |
| 0.0021 | 1323.0 | 443205 | 1.6486 | {'f1': 0.8831683168316831} | {'accuracy': 0.8302972195589645} |
| 0.0023 | 1324.0 | 443540 | 1.6302 | {'f1': 0.8834110592938041} | {'accuracy': 0.8322147651006712} |
| 0.0023 | 1325.0 | 443875 | 1.6211 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0065 | 1326.0 | 444210 | 1.6836 | {'f1': 0.8839922229423202} | {'accuracy': 0.8283796740172579} |
| 0.0017 | 1327.0 | 444545 | 1.7053 | {'f1': 0.8837209302325582} | {'accuracy': 0.8274209012464045} |
| 0.0017 | 1328.0 | 444880 | 1.6363 | {'f1': 0.8864229765013055} | {'accuracy': 0.8331735378715245} |
| 0.0018 | 1329.0 | 445215 | 1.6145 | {'f1': 0.8849206349206349} | {'accuracy': 0.8331735378715245} |
| 0.0016 | 1330.0 | 445550 | 1.6352 | {'f1': 0.8885959129861568} | {'accuracy': 0.837967401725791} |
| 0.0016 | 1331.0 | 445885 | 1.7131 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0016 | 1332.0 | 446220 | 1.7232 | {'f1': 0.88296488946684} | {'accuracy': 0.8274209012464045} |
| 0.0016 | 1333.0 | 446555 | 1.7374 | {'f1': 0.8823911630929175} | {'accuracy': 0.8264621284755513} |
| 0.0016 | 1334.0 | 446890 | 1.7155 | {'f1': 0.88296488946684} | {'accuracy': 0.8274209012464045} |
| 0.0023 | 1335.0 | 447225 | 1.6580 | {'f1': 0.8855263157894737} | {'accuracy': 0.8331735378715245} |
| 0.0019 | 1336.0 | 447560 | 1.7162 | {'f1': 0.8852672750977836} | {'accuracy': 0.8312559923298178} |
| 0.0019 | 1337.0 | 447895 | 1.6501 | {'f1': 0.8871287128712871} | {'accuracy': 0.8360498561840843} |
| 0.002 | 1338.0 | 448230 | 1.7218 | {'f1': 0.884967320261438} | {'accuracy': 0.8312559923298178} |
| 0.0025 | 1339.0 | 448565 | 1.6827 | {'f1': 0.8855263157894737} | {'accuracy': 0.8331735378715245} |
| 0.0025 | 1340.0 | 448900 | 1.6526 | {'f1': 0.8865435356200528} | {'accuracy': 0.835091083413231} |
| 0.0029 | 1341.0 | 449235 | 1.7285 | {'f1': 0.882084690553746} | {'accuracy': 0.8264621284755513} |
| 0.0025 | 1342.0 | 449570 | 1.6702 | {'f1': 0.8846407382992749} | {'accuracy': 0.8322147651006712} |
| 0.0025 | 1343.0 | 449905 | 1.6957 | {'f1': 0.8828947368421053} | {'accuracy': 0.8293384467881112} |
| 0.0039 | 1344.0 | 450240 | 1.7932 | {'f1': 0.8809831824062095} | {'accuracy': 0.8235858101629914} |
| 0.0014 | 1345.0 | 450575 | 1.7826 | {'f1': 0.8831168831168831} | {'accuracy': 0.8274209012464045} |
| 0.0014 | 1346.0 | 450910 | 1.7107 | {'f1': 0.8847926267281107} | {'accuracy': 0.8322147651006712} |
| 0.0018 | 1347.0 | 451245 | 1.7978 | {'f1': 0.8825438027255029} | {'accuracy': 0.8264621284755513} |
| 0.0024 | 1348.0 | 451580 | 1.7309 | {'f1': 0.8852459016393444} | {'accuracy': 0.8322147651006712} |
| 0.0024 | 1349.0 | 451915 | 1.7335 | {'f1': 0.8875816993464052} | {'accuracy': 0.835091083413231} |
| 0.0036 | 1350.0 | 452250 | 1.8606 | {'f1': 0.8784565916398713} | {'accuracy': 0.8187919463087249} |
| 0.004 | 1351.0 | 452585 | 1.7247 | {'f1': 0.8870019595035924} | {'accuracy': 0.8341323106423778} |
| 0.004 | 1352.0 | 452920 | 1.7836 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0024 | 1353.0 | 453255 | 1.6424 | {'f1': 0.8852023888520238} | {'accuracy': 0.8341323106423778} |
| 0.0032 | 1354.0 | 453590 | 1.8401 | {'f1': 0.8786127167630058} | {'accuracy': 0.8187919463087249} |
| 0.0032 | 1355.0 | 453925 | 1.7008 | {'f1': 0.8871391076115486} | {'accuracy': 0.835091083413231} |
| 0.0022 | 1356.0 | 454260 | 1.8063 | {'f1': 0.8800000000000001} | {'accuracy': 0.8216682646212847} |
| 0.0015 | 1357.0 | 454595 | 1.7254 | {'f1': 0.886408404464872} | {'accuracy': 0.8341323106423778} |
| 0.0015 | 1358.0 | 454930 | 1.7392 | {'f1': 0.8871391076115486} | {'accuracy': 0.835091083413231} |
| 0.0015 | 1359.0 | 455265 | 1.7593 | {'f1': 0.8805194805194805} | {'accuracy': 0.8235858101629914} |
| 0.0035 | 1360.0 | 455600 | 1.7062 | {'f1': 0.8850952068286276} | {'accuracy': 0.8322147651006712} |
| 0.0035 | 1361.0 | 455935 | 1.8376 | {'f1': 0.8771704180064309} | {'accuracy': 0.8168744007670182} |
| 0.0015 | 1362.0 | 456270 | 1.8939 | {'f1': 0.8753993610223643} | {'accuracy': 0.8130393096836049} |
| 0.002 | 1363.0 | 456605 | 1.8197 | {'f1': 0.8783000643915004} | {'accuracy': 0.8187919463087249} |
| 0.002 | 1364.0 | 456940 | 1.6470 | {'f1': 0.8877146631439895} | {'accuracy': 0.8370086289549377} |
| 0.0037 | 1365.0 | 457275 | 1.6349 | {'f1': 0.8896232650363516} | {'accuracy': 0.8398849472674976} |
| 0.0015 | 1366.0 | 457610 | 1.8006 | {'f1': 0.8822768434670116} | {'accuracy': 0.825503355704698} |
| 0.0015 | 1367.0 | 457945 | 1.6868 | {'f1': 0.8883048620236531} | {'accuracy': 0.8370086289549377} |
| 0.0021 | 1368.0 | 458280 | 1.7300 | {'f1': 0.8855461085676913} | {'accuracy': 0.8322147651006712} |
| 0.0016 | 1369.0 | 458615 | 1.6886 | {'f1': 0.8906455862977603} | {'accuracy': 0.840843720038351} |
| 0.0016 | 1370.0 | 458950 | 1.7364 | {'f1': 0.8878748370273793} | {'accuracy': 0.835091083413231} |
| 0.004 | 1371.0 | 459285 | 1.6351 | {'f1': 0.8906560636182903} | {'accuracy': 0.8418024928092043} |
| 0.0026 | 1372.0 | 459620 | 1.6329 | {'f1': 0.8906560636182903} | {'accuracy': 0.8418024928092043} |
| 0.0026 | 1373.0 | 459955 | 1.7231 | {'f1': 0.8859934853420196} | {'accuracy': 0.8322147651006712} |
| 0.0022 | 1374.0 | 460290 | 1.6809 | {'f1': 0.8885959129861568} | {'accuracy': 0.837967401725791} |
| 0.0016 | 1375.0 | 460625 | 1.7033 | {'f1': 0.886259040105194} | {'accuracy': 0.8341323106423778} |
| 0.0016 | 1376.0 | 460960 | 1.8204 | {'f1': 0.8798449612403101} | {'accuracy': 0.8216682646212847} |
| 0.0034 | 1377.0 | 461295 | 1.8108 | {'f1': 0.8815533980582524} | {'accuracy': 0.8245445829338447} |
| 0.0028 | 1378.0 | 461630 | 1.6920 | {'f1': 0.888597640891219} | {'accuracy': 0.8370086289549377} |
| 0.0028 | 1379.0 | 461965 | 1.7447 | {'f1': 0.8833876221498371} | {'accuracy': 0.8283796740172579} |
| 0.0014 | 1380.0 | 462300 | 1.7324 | {'f1': 0.8858447488584474} | {'accuracy': 0.8322147651006712} |
| 0.0042 | 1381.0 | 462635 | 1.7074 | {'f1': 0.8897637795275591} | {'accuracy': 0.8389261744966443} |
| 0.0042 | 1382.0 | 462970 | 1.7171 | {'f1': 0.884967320261438} | {'accuracy': 0.8312559923298178} |
| 0.0034 | 1383.0 | 463305 | 1.6497 | {'f1': 0.8877146631439895} | {'accuracy': 0.8370086289549377} |
| 0.0026 | 1384.0 | 463640 | 1.7298 | {'f1': 0.8828125} | {'accuracy': 0.8274209012464045} |
| 0.0026 | 1385.0 | 463975 | 1.7965 | {'f1': 0.8808290155440415} | {'accuracy': 0.8235858101629914} |
| 0.0016 | 1386.0 | 464310 | 1.6762 | {'f1': 0.8877146631439895} | {'accuracy': 0.8370086289549377} |
| 0.0023 | 1387.0 | 464645 | 1.6887 | {'f1': 0.8844621513944223} | {'accuracy': 0.8331735378715245} |
| 0.0023 | 1388.0 | 464980 | 1.6969 | {'f1': 0.8890347997373604} | {'accuracy': 0.837967401725791} |
| 0.0025 | 1389.0 | 465315 | 1.7113 | {'f1': 0.8878688524590164} | {'accuracy': 0.8360498561840843} |
| 0.0022 | 1390.0 | 465650 | 1.7995 | {'f1': 0.8837209302325582} | {'accuracy': 0.8274209012464045} |
| 0.0022 | 1391.0 | 465985 | 1.6902 | {'f1': 0.8863936591809775} | {'accuracy': 0.835091083413231} |
| 0.0027 | 1392.0 | 466320 | 1.7445 | {'f1': 0.8839634941329856} | {'accuracy': 0.8293384467881112} |
| 0.0025 | 1393.0 | 466655 | 1.8855 | {'f1': 0.8782051282051283} | {'accuracy': 0.8178331735378715} |
| 0.0025 | 1394.0 | 466990 | 1.7956 | {'f1': 0.8832684824902725} | {'accuracy': 0.8274209012464045} |
| 0.003 | 1395.0 | 467325 | 1.7950 | {'f1': 0.8821243523316061} | {'accuracy': 0.825503355704698} |
| 0.0023 | 1396.0 | 467660 | 1.7919 | {'f1': 0.8831168831168831} | {'accuracy': 0.8274209012464045} |
| 0.0023 | 1397.0 | 467995 | 1.9467 | {'f1': 0.8759590792838876} | {'accuracy': 0.8139980824544583} |
| 0.0033 | 1398.0 | 468330 | 1.7958 | {'f1': 0.8831504196255648} | {'accuracy': 0.8264621284755513} |
| 0.005 | 1399.0 | 468665 | 1.6945 | {'f1': 0.8858447488584474} | {'accuracy': 0.8322147651006712} |
| 0.0031 | 1400.0 | 469000 | 1.7097 | {'f1': 0.8858447488584474} | {'accuracy': 0.8322147651006712} |
| 0.0031 | 1401.0 | 469335 | 1.7022 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1402.0 | 469670 | 1.7029 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0062 | 1403.0 | 470005 | 1.6040 | {'f1': 0.8844621513944223} | {'accuracy': 0.8331735378715245} |
| 0.0062 | 1404.0 | 470340 | 1.6532 | {'f1': 0.8856767411300921} | {'accuracy': 0.8331735378715245} |
| 0.0014 | 1405.0 | 470675 | 1.6692 | {'f1': 0.886408404464872} | {'accuracy': 0.8341323106423778} |
| 0.0022 | 1406.0 | 471010 | 1.7241 | {'f1': 0.8855656697009102} | {'accuracy': 0.8312559923298178} |
| 0.0022 | 1407.0 | 471345 | 1.7169 | {'f1': 0.8872964169381107} | {'accuracy': 0.8341323106423778} |
| 0.0026 | 1408.0 | 471680 | 1.6779 | {'f1': 0.885396201702685} | {'accuracy': 0.8322147651006712} |
| 0.0032 | 1409.0 | 472015 | 1.6913 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0032 | 1410.0 | 472350 | 1.7031 | {'f1': 0.8852672750977836} | {'accuracy': 0.8312559923298178} |
| 0.0014 | 1411.0 | 472685 | 1.6634 | {'f1': 0.8883048620236531} | {'accuracy': 0.8370086289549377} |
| 0.0019 | 1412.0 | 473020 | 1.6079 | {'f1': 0.884589726484323} | {'accuracy': 0.8341323106423778} |
| 0.0019 | 1413.0 | 473355 | 1.6821 | {'f1': 0.8872870249017039} | {'accuracy': 0.835091083413231} |
| 0.0031 | 1414.0 | 473690 | 1.6891 | {'f1': 0.8872870249017039} | {'accuracy': 0.835091083413231} |
| 0.002 | 1415.0 | 474025 | 1.7140 | {'f1': 0.8868541530412034} | {'accuracy': 0.8341323106423778} |
| 0.002 | 1416.0 | 474360 | 1.7157 | {'f1': 0.8868541530412034} | {'accuracy': 0.8341323106423778} |
| 0.0017 | 1417.0 | 474695 | 1.7438 | {'f1': 0.8859934853420196} | {'accuracy': 0.8322147651006712} |
| 0.0028 | 1418.0 | 475030 | 1.7873 | {'f1': 0.8825438027255029} | {'accuracy': 0.8264621284755513} |
| 0.0028 | 1419.0 | 475365 | 1.7354 | {'f1': 0.887434554973822} | {'accuracy': 0.835091083413231} |
| 0.0016 | 1420.0 | 475700 | 1.7172 | {'f1': 0.8835978835978837} | {'accuracy': 0.8312559923298178} |
| 0.0029 | 1421.0 | 476035 | 1.8101 | {'f1': 0.8783958602846055} | {'accuracy': 0.8197507190795782} |
| 0.0029 | 1422.0 | 476370 | 1.7355 | {'f1': 0.8839344262295081} | {'accuracy': 0.8302972195589645} |
| 0.0019 | 1423.0 | 476705 | 1.7820 | {'f1': 0.88296488946684} | {'accuracy': 0.8274209012464045} |
| 0.0017 | 1424.0 | 477040 | 1.7999 | {'f1': 0.880103694102398} | {'accuracy': 0.822627037392138} |
| 0.0017 | 1425.0 | 477375 | 1.8347 | {'f1': 0.8819714656290532} | {'accuracy': 0.825503355704698} |
| 0.0014 | 1426.0 | 477710 | 1.7870 | {'f1': 0.8842380640941793} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1427.0 | 478045 | 1.7585 | {'f1': 0.8843626806833115} | {'accuracy': 0.8312559923298178} |
| 0.0015 | 1428.0 | 478380 | 1.6916 | {'f1': 0.8825481088254811} | {'accuracy': 0.8302972195589645} |
| 0.0027 | 1429.0 | 478715 | 1.7298 | {'f1': 0.8849441157133465} | {'accuracy': 0.8322147651006712} |
| 0.0025 | 1430.0 | 479050 | 1.9056 | {'f1': 0.8747591522157996} | {'accuracy': 0.8130393096836049} |
| 0.0025 | 1431.0 | 479385 | 1.7934 | {'f1': 0.8835393623942747} | {'accuracy': 0.8283796740172579} |
| 0.0034 | 1432.0 | 479720 | 1.7889 | {'f1': 0.8841145833333333} | {'accuracy': 0.8293384467881112} |
| 0.0023 | 1433.0 | 480055 | 1.7758 | {'f1': 0.8830829523187459} | {'accuracy': 0.8283796740172579} |
| 0.0023 | 1434.0 | 480390 | 1.7555 | {'f1': 0.884967320261438} | {'accuracy': 0.8312559923298178} |
| 0.0023 | 1435.0 | 480725 | 1.8154 | {'f1': 0.8831168831168831} | {'accuracy': 0.8274209012464045} |
| 0.0017 | 1436.0 | 481060 | 1.7982 | {'f1': 0.8828125} | {'accuracy': 0.8274209012464045} |
| 0.0017 | 1437.0 | 481395 | 1.7644 | {'f1': 0.8875816993464052} | {'accuracy': 0.835091083413231} |
| 0.0028 | 1438.0 | 481730 | 1.8388 | {'f1': 0.8784565916398713} | {'accuracy': 0.8187919463087249} |
| 0.0017 | 1439.0 | 482065 | 1.8311 | {'f1': 0.8779857972885732} | {'accuracy': 0.8187919463087249} |
| 0.0017 | 1440.0 | 482400 | 1.7660 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0027 | 1441.0 | 482735 | 1.7529 | {'f1': 0.8855461085676913} | {'accuracy': 0.8322147651006712} |
| 0.0019 | 1442.0 | 483070 | 1.7477 | {'f1': 0.8848167539267016} | {'accuracy': 0.8312559923298178} |
| 0.0019 | 1443.0 | 483405 | 1.7093 | {'f1': 0.8869908015768725} | {'accuracy': 0.835091083413231} |
| 0.0022 | 1444.0 | 483740 | 1.6986 | {'f1': 0.8875739644970414} | {'accuracy': 0.8360498561840843} |
| 0.0034 | 1445.0 | 484075 | 1.7028 | {'f1': 0.8861092824226466} | {'accuracy': 0.8341323106423778} |
| 0.0034 | 1446.0 | 484410 | 1.7117 | {'f1': 0.886259040105194} | {'accuracy': 0.8341323106423778} |
| 0.0016 | 1447.0 | 484745 | 1.8179 | {'f1': 0.8792769528728211} | {'accuracy': 0.8207094918504314} |
| 0.0026 | 1448.0 | 485080 | 1.6858 | {'f1': 0.8839050131926122} | {'accuracy': 0.8312559923298178} |
| 0.0026 | 1449.0 | 485415 | 1.6966 | {'f1': 0.8853754940711462} | {'accuracy': 0.8331735378715245} |
| 0.0017 | 1450.0 | 485750 | 1.7809 | {'f1': 0.8832684824902725} | {'accuracy': 0.8274209012464045} |
| 0.0028 | 1451.0 | 486085 | 1.6901 | {'f1': 0.8847926267281107} | {'accuracy': 0.8322147651006712} |
| 0.0028 | 1452.0 | 486420 | 1.6931 | {'f1': 0.8861092824226466} | {'accuracy': 0.8341323106423778} |
| 0.0037 | 1453.0 | 486755 | 1.6848 | {'f1': 0.883322346736981} | {'accuracy': 0.8302972195589645} |
| 0.0027 | 1454.0 | 487090 | 1.7625 | {'f1': 0.8857142857142858} | {'accuracy': 0.8312559923298178} |
| 0.0027 | 1455.0 | 487425 | 1.7548 | {'f1': 0.881664499349805} | {'accuracy': 0.825503355704698} |
| 0.0029 | 1456.0 | 487760 | 1.6995 | {'f1': 0.8850952068286276} | {'accuracy': 0.8322147651006712} |
| 0.0017 | 1457.0 | 488095 | 1.7584 | {'f1': 0.8828125} | {'accuracy': 0.8274209012464045} |
| 0.0017 | 1458.0 | 488430 | 1.8234 | {'f1': 0.8801546391752577} | {'accuracy': 0.8216682646212847} |
| 0.0022 | 1459.0 | 488765 | 1.8236 | {'f1': 0.881859264041317} | {'accuracy': 0.8245445829338447} |
| 0.0015 | 1460.0 | 489100 | 1.6924 | {'f1': 0.8859591298615689} | {'accuracy': 0.8341323106423778} |
| 0.0015 | 1461.0 | 489435 | 1.7238 | {'f1': 0.8865573770491804} | {'accuracy': 0.8341323106423778} |
| 0.0018 | 1462.0 | 489770 | 1.6926 | {'f1': 0.8861092824226466} | {'accuracy': 0.8341323106423778} |
| 0.0028 | 1463.0 | 490105 | 1.6881 | {'f1': 0.8872775214238628} | {'accuracy': 0.8360498561840843} |
| 0.0028 | 1464.0 | 490440 | 1.7862 | {'f1': 0.8822768434670116} | {'accuracy': 0.825503355704698} |
| 0.0021 | 1465.0 | 490775 | 1.6801 | {'f1': 0.8880105401844532} | {'accuracy': 0.8370086289549377} |
| 0.0022 | 1466.0 | 491110 | 1.6993 | {'f1': 0.8880157170923381} | {'accuracy': 0.8360498561840843} |
| 0.0022 | 1467.0 | 491445 | 1.7852 | {'f1': 0.8848641655886158} | {'accuracy': 0.8293384467881112} |
| 0.0018 | 1468.0 | 491780 | 1.6797 | {'f1': 0.8878688524590164} | {'accuracy': 0.8360498561840843} |
| 0.0019 | 1469.0 | 492115 | 1.6253 | {'f1': 0.8877146631439895} | {'accuracy': 0.8370086289549377} |
| 0.0019 | 1470.0 | 492450 | 1.6399 | {'f1': 0.8899143045484509} | {'accuracy': 0.8398849472674976} |
| 0.0015 | 1471.0 | 492785 | 1.5905 | {'f1': 0.8842530282637953} | {'accuracy': 0.835091083413231} |
| 0.0032 | 1472.0 | 493120 | 1.6880 | {'f1': 0.8859764089121887} | {'accuracy': 0.8331735378715245} |
| 0.0032 | 1473.0 | 493455 | 1.6965 | {'f1': 0.8868541530412034} | {'accuracy': 0.8341323106423778} |
| 0.0013 | 1474.0 | 493790 | 1.7186 | {'f1': 0.8872964169381107} | {'accuracy': 0.8341323106423778} |
| 0.0017 | 1475.0 | 494125 | 1.9267 | {'f1': 0.8773946360153257} | {'accuracy': 0.8159156279961649} |
| 0.0017 | 1476.0 | 494460 | 1.7098 | {'f1': 0.8845401174168297} | {'accuracy': 0.8302972195589645} |
| 0.0025 | 1477.0 | 494795 | 1.7272 | {'f1': 0.8845401174168297} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1478.0 | 495130 | 1.7464 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1479.0 | 495465 | 1.7622 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0037 | 1480.0 | 495800 | 1.7577 | {'f1': 0.8844155844155843} | {'accuracy': 0.8293384467881112} |
| 0.0028 | 1481.0 | 496135 | 1.6990 | {'f1': 0.8867059593975114} | {'accuracy': 0.8341323106423778} |
| 0.0028 | 1482.0 | 496470 | 1.6621 | {'f1': 0.8874172185430463} | {'accuracy': 0.8370086289549377} |
| 0.0025 | 1483.0 | 496805 | 1.6577 | {'f1': 0.8866799204771372} | {'accuracy': 0.8360498561840843} |
| 0.0014 | 1484.0 | 497140 | 1.6657 | {'f1': 0.886829913964262} | {'accuracy': 0.8360498561840843} |
| 0.0014 | 1485.0 | 497475 | 1.6798 | {'f1': 0.8862433862433863} | {'accuracy': 0.835091083413231} |
| 0.0016 | 1486.0 | 497810 | 1.7069 | {'f1': 0.8852242744063324} | {'accuracy': 0.8331735378715245} |
| 0.0017 | 1487.0 | 498145 | 1.7363 | {'f1': 0.8871391076115486} | {'accuracy': 0.835091083413231} |
| 0.0017 | 1488.0 | 498480 | 1.7298 | {'f1': 0.8855461085676913} | {'accuracy': 0.8322147651006712} |
| 0.0026 | 1489.0 | 498815 | 1.7036 | {'f1': 0.8828590337524818} | {'accuracy': 0.8302972195589645} |
| 0.0016 | 1490.0 | 499150 | 1.7324 | {'f1': 0.8839344262295081} | {'accuracy': 0.8302972195589645} |
| 0.0016 | 1491.0 | 499485 | 1.6758 | {'f1': 0.882786336235767} | {'accuracy': 0.8322147651006712} |
| 0.0016 | 1492.0 | 499820 | 1.7096 | {'f1': 0.8812209688122097} | {'accuracy': 0.8283796740172579} |
| 0.0016 | 1493.0 | 500155 | 1.7135 | {'f1': 0.8812209688122097} | {'accuracy': 0.8283796740172579} |
| 0.0016 | 1494.0 | 500490 | 1.7094 | {'f1': 0.8849206349206349} | {'accuracy': 0.8331735378715245} |
| 0.0023 | 1495.0 | 500825 | 1.8905 | {'f1': 0.8798972382787411} | {'accuracy': 0.8207094918504314} |
| 0.003 | 1496.0 | 501160 | 1.8150 | {'f1': 0.8811369509043928} | {'accuracy': 0.8235858101629914} |
| 0.003 | 1497.0 | 501495 | 1.8051 | {'f1': 0.8831168831168831} | {'accuracy': 0.8274209012464045} |
| 0.0023 | 1498.0 | 501830 | 1.7485 | {'f1': 0.8845144356955381} | {'accuracy': 0.8312559923298178} |
| 0.0027 | 1499.0 | 502165 | 1.7361 | {'f1': 0.8839050131926122} | {'accuracy': 0.8312559923298178} |
| 0.0023 | 1500.0 | 502500 | 1.7502 | {'f1': 0.8856767411300921} | {'accuracy': 0.8331735378715245} |
| 0.0023 | 1501.0 | 502835 | 1.8477 | {'f1': 0.881859264041317} | {'accuracy': 0.8245445829338447} |
| 0.003 | 1502.0 | 503170 | 1.7844 | {'f1': 0.8810916179337233} | {'accuracy': 0.8245445829338447} |
| 0.0014 | 1503.0 | 503505 | 1.7397 | {'f1': 0.8859764089121887} | {'accuracy': 0.8331735378715245} |
| 0.0014 | 1504.0 | 503840 | 1.7357 | {'f1': 0.8868421052631579} | {'accuracy': 0.835091083413231} |
| 0.0016 | 1505.0 | 504175 | 1.7151 | {'f1': 0.88433575677462} | {'accuracy': 0.8322147651006712} |
| 0.0018 | 1506.0 | 504510 | 1.7394 | {'f1': 0.8869908015768725} | {'accuracy': 0.835091083413231} |
| 0.0018 | 1507.0 | 504845 | 1.8398 | {'f1': 0.8827319587628866} | {'accuracy': 0.825503355704698} |
| 0.0036 | 1508.0 | 505180 | 1.7135 | {'f1': 0.8897637795275591} | {'accuracy': 0.8389261744966443} |
| 0.003 | 1509.0 | 505515 | 1.7473 | {'f1': 0.8912052117263843} | {'accuracy': 0.8398849472674976} |
| 0.003 | 1510.0 | 505850 | 1.7262 | {'f1': 0.8904823989569751} | {'accuracy': 0.8389261744966443} |
| 0.0027 | 1511.0 | 506185 | 1.7378 | {'f1': 0.8900455432661027} | {'accuracy': 0.837967401725791} |
| 0.0017 | 1512.0 | 506520 | 1.7261 | {'f1': 0.8894702419882277} | {'accuracy': 0.837967401725791} |
| 0.0017 | 1513.0 | 506855 | 1.7393 | {'f1': 0.8904823989569751} | {'accuracy': 0.8389261744966443} |
| 0.0013 | 1514.0 | 507190 | 1.7156 | {'f1': 0.8859591298615689} | {'accuracy': 0.8341323106423778} |
| 0.0024 | 1515.0 | 507525 | 1.7347 | {'f1': 0.8865573770491804} | {'accuracy': 0.8341323106423778} |
| 0.0024 | 1516.0 | 507860 | 1.7451 | {'f1': 0.8871391076115486} | {'accuracy': 0.835091083413231} |
| 0.0036 | 1517.0 | 508195 | 1.7153 | {'f1': 0.8793103448275862} | {'accuracy': 0.825503355704698} |
| 0.0019 | 1518.0 | 508530 | 1.7500 | {'f1': 0.8843626806833115} | {'accuracy': 0.8312559923298178} |
| 0.0019 | 1519.0 | 508865 | 1.7322 | {'f1': 0.8793565683646113} | {'accuracy': 0.8274209012464045} |
| 0.0023 | 1520.0 | 509200 | 1.7401 | {'f1': 0.8800530152418821} | {'accuracy': 0.8264621284755513} |
| 0.0025 | 1521.0 | 509535 | 1.7824 | {'f1': 0.8841145833333333} | {'accuracy': 0.8293384467881112} |
| 0.0025 | 1522.0 | 509870 | 1.7498 | {'f1': 0.8829300196206671} | {'accuracy': 0.8283796740172579} |
| 0.0024 | 1523.0 | 510205 | 1.7878 | {'f1': 0.8851395197923426} | {'accuracy': 0.8302972195589645} |
| 0.0025 | 1524.0 | 510540 | 1.7520 | {'f1': 0.8848167539267016} | {'accuracy': 0.8312559923298178} |
| 0.0025 | 1525.0 | 510875 | 1.7280 | {'f1': 0.8788282290279628} | {'accuracy': 0.825503355704698} |
| 0.0015 | 1526.0 | 511210 | 1.7614 | {'f1': 0.8846657929226737} | {'accuracy': 0.8312559923298178} |
| 0.0021 | 1527.0 | 511545 | 1.7547 | {'f1': 0.8846657929226737} | {'accuracy': 0.8312559923298178} |
| 0.0021 | 1528.0 | 511880 | 1.7802 | {'f1': 0.8859934853420196} | {'accuracy': 0.8322147651006712} |
| 0.0021 | 1529.0 | 512215 | 1.7649 | {'f1': 0.884967320261438} | {'accuracy': 0.8312559923298178} |
| 0.0021 | 1530.0 | 512550 | 1.7931 | {'f1': 0.8857142857142858} | {'accuracy': 0.8312559923298178} |
| 0.0021 | 1531.0 | 512885 | 1.8428 | {'f1': 0.8842921784098255} | {'accuracy': 0.8283796740172579} |
| 0.0013 | 1532.0 | 513220 | 1.7752 | {'f1': 0.8845401174168297} | {'accuracy': 0.8302972195589645} |
| 0.0019 | 1533.0 | 513555 | 1.8193 | {'f1': 0.8857142857142858} | {'accuracy': 0.8312559923298178} |
| 0.0019 | 1534.0 | 513890 | 1.7897 | {'f1': 0.8833876221498371} | {'accuracy': 0.8283796740172579} |
| 0.0021 | 1535.0 | 514225 | 1.7640 | {'f1': 0.8829300196206671} | {'accuracy': 0.8283796740172579} |
| 0.0014 | 1536.0 | 514560 | 1.7952 | {'f1': 0.8835393623942747} | {'accuracy': 0.8283796740172579} |
| 0.0014 | 1537.0 | 514895 | 1.7696 | {'f1': 0.8832354859752121} | {'accuracy': 0.8283796740172579} |
| 0.0046 | 1538.0 | 515230 | 1.7476 | {'f1': 0.8826229508196721} | {'accuracy': 0.8283796740172579} |
| 0.0026 | 1539.0 | 515565 | 1.7701 | {'f1': 0.8838120104438641} | {'accuracy': 0.8293384467881112} |
| 0.0026 | 1540.0 | 515900 | 1.7135 | {'f1': 0.8813333333333333} | {'accuracy': 0.8293384467881112} |
| 0.0023 | 1541.0 | 516235 | 1.7216 | {'f1': 0.8852242744063324} | {'accuracy': 0.8331735378715245} |
| 0.0015 | 1542.0 | 516570 | 1.7327 | {'f1': 0.8834759710335748} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1543.0 | 516905 | 1.9153 | {'f1': 0.8810289389067525} | {'accuracy': 0.822627037392138} |
| 0.0023 | 1544.0 | 517240 | 1.9283 | {'f1': 0.8814862267777066} | {'accuracy': 0.822627037392138} |
| 0.0013 | 1545.0 | 517575 | 1.7775 | {'f1': 0.8835393623942747} | {'accuracy': 0.8283796740172579} |
| 0.0013 | 1546.0 | 517910 | 1.7782 | {'f1': 0.8845401174168297} | {'accuracy': 0.8302972195589645} |
| 0.0021 | 1547.0 | 518245 | 1.7321 | {'f1': 0.8839050131926122} | {'accuracy': 0.8312559923298178} |
| 0.0037 | 1548.0 | 518580 | 1.7306 | {'f1': 0.8824306472919419} | {'accuracy': 0.8293384467881112} |
| 0.0037 | 1549.0 | 518915 | 1.7853 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0022 | 1550.0 | 519250 | 1.7888 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0022 | 1551.0 | 519585 | 1.7722 | {'f1': 0.8836601307189542} | {'accuracy': 0.8293384467881112} |
| 0.0022 | 1552.0 | 519920 | 1.8264 | {'f1': 0.8835705045278138} | {'accuracy': 0.8274209012464045} |
| 0.0022 | 1553.0 | 520255 | 1.8034 | {'f1': 0.8823911630929175} | {'accuracy': 0.8264621284755513} |
| 0.0017 | 1554.0 | 520590 | 1.8151 | {'f1': 0.8823911630929175} | {'accuracy': 0.8264621284755513} |
| 0.0017 | 1555.0 | 520925 | 1.7742 | {'f1': 0.8823529411764706} | {'accuracy': 0.8274209012464045} |
| 0.0018 | 1556.0 | 521260 | 1.7667 | {'f1': 0.8814669286182057} | {'accuracy': 0.8264621284755513} |
| 0.0042 | 1557.0 | 521595 | 1.8058 | {'f1': 0.8823911630929175} | {'accuracy': 0.8264621284755513} |
| 0.0042 | 1558.0 | 521930 | 1.8213 | {'f1': 0.8847150259067358} | {'accuracy': 0.8293384467881112} |
| 0.0018 | 1559.0 | 522265 | 1.8251 | {'f1': 0.8850129198966409} | {'accuracy': 0.8293384467881112} |
| 0.0022 | 1560.0 | 522600 | 1.8970 | {'f1': 0.8864656831302116} | {'accuracy': 0.8302972195589645} |
| 0.0022 | 1561.0 | 522935 | 1.7577 | {'f1': 0.881311475409836} | {'accuracy': 0.8264621284755513} |
| 0.0022 | 1562.0 | 523270 | 1.7610 | {'f1': 0.8817345597897503} | {'accuracy': 0.8274209012464045} |
| 0.0018 | 1563.0 | 523605 | 1.8030 | {'f1': 0.8799480856586632} | {'accuracy': 0.822627037392138} |
| 0.0018 | 1564.0 | 523940 | 1.8052 | {'f1': 0.880674448767834} | {'accuracy': 0.8235858101629914} |
| 0.0016 | 1565.0 | 524275 | 1.8718 | {'f1': 0.8817480719794344} | {'accuracy': 0.8235858101629914} |
| 0.0028 | 1566.0 | 524610 | 1.7997 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0028 | 1567.0 | 524945 | 1.8031 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0019 | 1568.0 | 525280 | 1.7343 | {'f1': 0.8827037773359842} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1569.0 | 525615 | 1.7383 | {'f1': 0.8818481848184818} | {'accuracy': 0.8283796740172579} |
| 0.0017 | 1570.0 | 525950 | 1.7713 | {'f1': 0.8832354859752121} | {'accuracy': 0.8283796740172579} |
| 0.0027 | 1571.0 | 526285 | 1.7856 | {'f1': 0.8813559322033898} | {'accuracy': 0.825503355704698} |
| 0.0025 | 1572.0 | 526620 | 1.7484 | {'f1': 0.8849441157133465} | {'accuracy': 0.8322147651006712} |
| 0.0025 | 1573.0 | 526955 | 1.7433 | {'f1': 0.8847926267281107} | {'accuracy': 0.8322147651006712} |
| 0.0015 | 1574.0 | 527290 | 1.7106 | {'f1': 0.885506287227002} | {'accuracy': 0.8341323106423778} |
| 0.002 | 1575.0 | 527625 | 1.7500 | {'f1': 0.8843626806833115} | {'accuracy': 0.8312559923298178} |
| 0.002 | 1576.0 | 527960 | 1.7366 | {'f1': 0.8840579710144927} | {'accuracy': 0.8312559923298178} |
| 0.0018 | 1577.0 | 528295 | 1.8085 | {'f1': 0.8818181818181818} | {'accuracy': 0.825503355704698} |
| 0.0025 | 1578.0 | 528630 | 1.8183 | {'f1': 0.8828478964401294} | {'accuracy': 0.8264621284755513} |
| 0.0025 | 1579.0 | 528965 | 1.8005 | {'f1': 0.8823911630929175} | {'accuracy': 0.8264621284755513} |
| 0.0016 | 1580.0 | 529300 | 1.8099 | {'f1': 0.8812459441920829} | {'accuracy': 0.8245445829338447} |
| 0.0016 | 1581.0 | 529635 | 1.8000 | {'f1': 0.8832354859752121} | {'accuracy': 0.8283796740172579} |
| 0.0016 | 1582.0 | 529970 | 1.7656 | {'f1': 0.8858267716535433} | {'accuracy': 0.8331735378715245} |
| 0.0018 | 1583.0 | 530305 | 1.7619 | {'f1': 0.8875739644970414} | {'accuracy': 0.8360498561840843} |
| 0.0027 | 1584.0 | 530640 | 1.8659 | {'f1': 0.8838709677419355} | {'accuracy': 0.8274209012464045} |
| 0.0027 | 1585.0 | 530975 | 1.9368 | {'f1': 0.8802049967969251} | {'accuracy': 0.8207094918504314} |
| 0.0022 | 1586.0 | 531310 | 1.8092 | {'f1': 0.8833876221498371} | {'accuracy': 0.8283796740172579} |
| 0.0014 | 1587.0 | 531645 | 1.7587 | {'f1': 0.8846407382992749} | {'accuracy': 0.8322147651006712} |
| 0.0014 | 1588.0 | 531980 | 1.7593 | {'f1': 0.8844884488448844} | {'accuracy': 0.8322147651006712} |
| 0.002 | 1589.0 | 532315 | 1.7504 | {'f1': 0.8856576338400529} | {'accuracy': 0.8341323106423778} |
| 0.0029 | 1590.0 | 532650 | 1.8576 | {'f1': 0.8820116054158607} | {'accuracy': 0.8245445829338447} |
| 0.0029 | 1591.0 | 532985 | 1.8579 | {'f1': 0.8820116054158607} | {'accuracy': 0.8245445829338447} |
| 0.0028 | 1592.0 | 533320 | 1.7852 | {'f1': 0.8819308545335943} | {'accuracy': 0.8264621284755513} |
| 0.0019 | 1593.0 | 533655 | 1.8312 | {'f1': 0.8845654993514916} | {'accuracy': 0.8293384467881112} |
| 0.0019 | 1594.0 | 533990 | 1.8185 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0015 | 1595.0 | 534325 | 1.8286 | {'f1': 0.8844155844155843} | {'accuracy': 0.8293384467881112} |
| 0.0016 | 1596.0 | 534660 | 1.7812 | {'f1': 0.8846657929226737} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1597.0 | 534995 | 1.7121 | {'f1': 0.8859239492995331} | {'accuracy': 0.8360498561840843} |
| 0.0031 | 1598.0 | 535330 | 1.7505 | {'f1': 0.8865435356200528} | {'accuracy': 0.835091083413231} |
| 0.0024 | 1599.0 | 535665 | 1.7722 | {'f1': 0.8861092824226466} | {'accuracy': 0.8341323106423778} |
| 0.0018 | 1600.0 | 536000 | 1.7991 | {'f1': 0.8855461085676913} | {'accuracy': 0.8322147651006712} |
| 0.0018 | 1601.0 | 536335 | 1.7896 | {'f1': 0.8858267716535433} | {'accuracy': 0.8331735378715245} |
| 0.0027 | 1602.0 | 536670 | 1.7567 | {'f1': 0.8850726552179656} | {'accuracy': 0.8331735378715245} |
| 0.0034 | 1603.0 | 537005 | 1.7735 | {'f1': 0.8871391076115486} | {'accuracy': 0.835091083413231} |
| 0.0034 | 1604.0 | 537340 | 1.7833 | {'f1': 0.888597640891219} | {'accuracy': 0.8370086289549377} |
| 0.0036 | 1605.0 | 537675 | 1.7943 | {'f1': 0.8877284595300261} | {'accuracy': 0.835091083413231} |
| 0.0017 | 1606.0 | 538010 | 1.8458 | {'f1': 0.8839922229423202} | {'accuracy': 0.8283796740172579} |
| 0.0017 | 1607.0 | 538345 | 1.8366 | {'f1': 0.8845654993514916} | {'accuracy': 0.8293384467881112} |
| 0.0018 | 1608.0 | 538680 | 1.8458 | {'f1': 0.883419689119171} | {'accuracy': 0.8274209012464045} |
| 0.0019 | 1609.0 | 539015 | 1.8213 | {'f1': 0.8862897985705004} | {'accuracy': 0.8322147651006712} |
| 0.0019 | 1610.0 | 539350 | 1.8045 | {'f1': 0.8855656697009102} | {'accuracy': 0.8312559923298178} |
| 0.002 | 1611.0 | 539685 | 1.8566 | {'f1': 0.8839922229423202} | {'accuracy': 0.8283796740172579} |
| 0.0012 | 1612.0 | 540020 | 1.9060 | {'f1': 0.8827674567584881} | {'accuracy': 0.8245445829338447} |
| 0.0012 | 1613.0 | 540355 | 1.8649 | {'f1': 0.8864516129032257} | {'accuracy': 0.8312559923298178} |
| 0.0023 | 1614.0 | 540690 | 1.8564 | {'f1': 0.8864516129032257} | {'accuracy': 0.8312559923298178} |
| 0.0021 | 1615.0 | 541025 | 1.7985 | {'f1': 0.8862897985705004} | {'accuracy': 0.8322147651006712} |
| 0.0021 | 1616.0 | 541360 | 1.8464 | {'f1': 0.8848641655886158} | {'accuracy': 0.8293384467881112} |
| 0.0012 | 1617.0 | 541695 | 1.8302 | {'f1': 0.8862897985705004} | {'accuracy': 0.8322147651006712} |
| 0.0028 | 1618.0 | 542030 | 1.7613 | {'f1': 0.8869908015768725} | {'accuracy': 0.835091083413231} |
| 0.0028 | 1619.0 | 542365 | 1.7998 | {'f1': 0.8859934853420196} | {'accuracy': 0.8322147651006712} |
| 0.0029 | 1620.0 | 542700 | 1.8849 | {'f1': 0.8825806451612904} | {'accuracy': 0.825503355704698} |
| 0.0015 | 1621.0 | 543035 | 1.8498 | {'f1': 0.8867313915857605} | {'accuracy': 0.8322147651006712} |
| 0.0015 | 1622.0 | 543370 | 1.8221 | {'f1': 0.88715953307393} | {'accuracy': 0.8331735378715245} |
| 0.0037 | 1623.0 | 543705 | 1.7674 | {'f1': 0.8887434554973821} | {'accuracy': 0.8370086289549377} |
| 0.0019 | 1624.0 | 544040 | 1.7658 | {'f1': 0.8887434554973821} | {'accuracy': 0.8370086289549377} |
| 0.0019 | 1625.0 | 544375 | 1.7336 | {'f1': 0.8887434554973821} | {'accuracy': 0.8370086289549377} |
| 0.0032 | 1626.0 | 544710 | 1.7226 | {'f1': 0.8871391076115486} | {'accuracy': 0.835091083413231} |
| 0.0026 | 1627.0 | 545045 | 1.6854 | {'f1': 0.8819628647214854} | {'accuracy': 0.8293384467881112} |
| 0.0026 | 1628.0 | 545380 | 1.7412 | {'f1': 0.8875816993464052} | {'accuracy': 0.835091083413231} |
| 0.0016 | 1629.0 | 545715 | 1.7498 | {'f1': 0.8890339425587468} | {'accuracy': 0.8370086289549377} |
| 0.0016 | 1630.0 | 546050 | 1.7689 | {'f1': 0.8878748370273793} | {'accuracy': 0.835091083413231} |
| 0.0016 | 1631.0 | 546385 | 1.7637 | {'f1': 0.8875816993464052} | {'accuracy': 0.835091083413231} |
| 0.0027 | 1632.0 | 546720 | 1.7512 | {'f1': 0.8871391076115486} | {'accuracy': 0.835091083413231} |
| 0.0016 | 1633.0 | 547055 | 1.7382 | {'f1': 0.8839050131926122} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1634.0 | 547390 | 1.7454 | {'f1': 0.8856767411300921} | {'accuracy': 0.8331735378715245} |
| 0.0026 | 1635.0 | 547725 | 1.7604 | {'f1': 0.8872870249017039} | {'accuracy': 0.835091083413231} |
| 0.0015 | 1636.0 | 548060 | 1.7915 | {'f1': 0.8870019595035924} | {'accuracy': 0.8341323106423778} |
| 0.0015 | 1637.0 | 548395 | 1.7890 | {'f1': 0.8883082952318746} | {'accuracy': 0.8360498561840843} |
| 0.0018 | 1638.0 | 548730 | 1.7836 | {'f1': 0.8875816993464052} | {'accuracy': 0.835091083413231} |
| 0.0016 | 1639.0 | 549065 | 1.7746 | {'f1': 0.8872870249017039} | {'accuracy': 0.835091083413231} |
| 0.0016 | 1640.0 | 549400 | 1.7700 | {'f1': 0.8880157170923381} | {'accuracy': 0.8360498561840843} |
| 0.0013 | 1641.0 | 549735 | 1.7789 | {'f1': 0.8867059593975114} | {'accuracy': 0.8341323106423778} |
| 0.0019 | 1642.0 | 550070 | 1.8184 | {'f1': 0.8874430709173715} | {'accuracy': 0.8341323106423778} |
| 0.0019 | 1643.0 | 550405 | 1.8149 | {'f1': 0.88671875} | {'accuracy': 0.8331735378715245} |
| 0.0015 | 1644.0 | 550740 | 1.7591 | {'f1': 0.8865573770491804} | {'accuracy': 0.8341323106423778} |
| 0.0017 | 1645.0 | 551075 | 1.7327 | {'f1': 0.8831341301460823} | {'accuracy': 0.8312559923298178} |
| 0.0017 | 1646.0 | 551410 | 1.7562 | {'f1': 0.8816920026437541} | {'accuracy': 0.8283796740172579} |
| 0.0016 | 1647.0 | 551745 | 1.8002 | {'f1': 0.8867059593975114} | {'accuracy': 0.8341323106423778} |
| 0.0025 | 1648.0 | 552080 | 1.7971 | {'f1': 0.8867059593975114} | {'accuracy': 0.8341323106423778} |
| 0.0025 | 1649.0 | 552415 | 1.7689 | {'f1': 0.886408404464872} | {'accuracy': 0.8341323106423778} |
| 0.0021 | 1650.0 | 552750 | 1.7870 | {'f1': 0.8861256544502618} | {'accuracy': 0.8331735378715245} |
| 0.0027 | 1651.0 | 553085 | 1.7626 | {'f1': 0.886259040105194} | {'accuracy': 0.8341323106423778} |
| 0.0027 | 1652.0 | 553420 | 1.8169 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0027 | 1653.0 | 553755 | 1.7918 | {'f1': 0.8875816993464052} | {'accuracy': 0.835091083413231} |
| 0.0017 | 1654.0 | 554090 | 1.8755 | {'f1': 0.8852883992222943} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1655.0 | 554425 | 1.8191 | {'f1': 0.8851174934725848} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1656.0 | 554760 | 1.8020 | {'f1': 0.8835078534031414} | {'accuracy': 0.8293384467881112} |
| 0.0017 | 1657.0 | 555095 | 1.7870 | {'f1': 0.8846407382992749} | {'accuracy': 0.8322147651006712} |
| 0.0017 | 1658.0 | 555430 | 1.7991 | {'f1': 0.8820445609436436} | {'accuracy': 0.8274209012464045} |
| 0.0031 | 1659.0 | 555765 | 1.8111 | {'f1': 0.8827766863130322} | {'accuracy': 0.8283796740172579} |
| 0.002 | 1660.0 | 556100 | 1.8213 | {'f1': 0.8838120104438641} | {'accuracy': 0.8293384467881112} |
| 0.002 | 1661.0 | 556435 | 1.8229 | {'f1': 0.8838120104438641} | {'accuracy': 0.8293384467881112} |
| 0.0015 | 1662.0 | 556770 | 1.8275 | {'f1': 0.8838120104438641} | {'accuracy': 0.8293384467881112} |
| 0.0017 | 1663.0 | 557105 | 1.8519 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0017 | 1664.0 | 557440 | 1.8280 | {'f1': 0.8836601307189542} | {'accuracy': 0.8293384467881112} |
| 0.0015 | 1665.0 | 557775 | 1.8727 | {'f1': 0.8864373783257624} | {'accuracy': 0.8322147651006712} |
| 0.0015 | 1666.0 | 558110 | 1.7752 | {'f1': 0.8859591298615689} | {'accuracy': 0.8341323106423778} |
| 0.0015 | 1667.0 | 558445 | 1.7451 | {'f1': 0.8862275449101797} | {'accuracy': 0.8360498561840843} |
| 0.0023 | 1668.0 | 558780 | 1.7438 | {'f1': 0.885486018641811} | {'accuracy': 0.835091083413231} |
| 0.0013 | 1669.0 | 559115 | 1.7877 | {'f1': 0.8842105263157894} | {'accuracy': 0.8312559923298178} |
| 0.0013 | 1670.0 | 559450 | 1.8268 | {'f1': 0.8865710560625816} | {'accuracy': 0.8331735378715245} |
| 0.0042 | 1671.0 | 559785 | 1.7939 | {'f1': 0.8845144356955381} | {'accuracy': 0.8312559923298178} |
| 0.0014 | 1672.0 | 560120 | 1.7617 | {'f1': 0.8871287128712871} | {'accuracy': 0.8360498561840843} |
| 0.0014 | 1673.0 | 560455 | 1.7695 | {'f1': 0.8859591298615689} | {'accuracy': 0.8341323106423778} |
| 0.0018 | 1674.0 | 560790 | 1.7976 | {'f1': 0.8842380640941793} | {'accuracy': 0.8302972195589645} |
| 0.0032 | 1675.0 | 561125 | 1.7758 | {'f1': 0.886259040105194} | {'accuracy': 0.8341323106423778} |
| 0.0032 | 1676.0 | 561460 | 1.7853 | {'f1': 0.8865573770491804} | {'accuracy': 0.8341323106423778} |
| 0.0016 | 1677.0 | 561795 | 1.8183 | {'f1': 0.8865710560625816} | {'accuracy': 0.8331735378715245} |
| 0.0021 | 1678.0 | 562130 | 1.8137 | {'f1': 0.8864229765013055} | {'accuracy': 0.8331735378715245} |
| 0.0021 | 1679.0 | 562465 | 1.7866 | {'f1': 0.8869908015768725} | {'accuracy': 0.835091083413231} |
| 0.0014 | 1680.0 | 562800 | 1.8024 | {'f1': 0.8861256544502618} | {'accuracy': 0.8331735378715245} |
| 0.002 | 1681.0 | 563135 | 1.8350 | {'f1': 0.8841145833333333} | {'accuracy': 0.8293384467881112} |
| 0.002 | 1682.0 | 563470 | 1.8830 | {'f1': 0.8847150259067358} | {'accuracy': 0.8293384467881112} |
| 0.0021 | 1683.0 | 563805 | 1.8897 | {'f1': 0.8829993535875889} | {'accuracy': 0.8264621284755513} |
| 0.0022 | 1684.0 | 564140 | 1.8933 | {'f1': 0.8829993535875889} | {'accuracy': 0.8264621284755513} |
| 0.0022 | 1685.0 | 564475 | 1.8870 | {'f1': 0.883419689119171} | {'accuracy': 0.8274209012464045} |
| 0.0016 | 1686.0 | 564810 | 1.8205 | {'f1': 0.8862745098039215} | {'accuracy': 0.8331735378715245} |
| 0.0022 | 1687.0 | 565145 | 1.8695 | {'f1': 0.8851395197923426} | {'accuracy': 0.8302972195589645} |
| 0.0022 | 1688.0 | 565480 | 1.7729 | {'f1': 0.8883048620236531} | {'accuracy': 0.8370086289549377} |
| 0.0025 | 1689.0 | 565815 | 1.8150 | {'f1': 0.8856956237753103} | {'accuracy': 0.8322147651006712} |
| 0.0018 | 1690.0 | 566150 | 1.8247 | {'f1': 0.8833876221498371} | {'accuracy': 0.8283796740172579} |
| 0.0018 | 1691.0 | 566485 | 1.7694 | {'f1': 0.8877216021011162} | {'accuracy': 0.8360498561840843} |
| 0.0017 | 1692.0 | 566820 | 1.8261 | {'f1': 0.8838120104438641} | {'accuracy': 0.8293384467881112} |
| 0.0017 | 1693.0 | 567155 | 1.9000 | {'f1': 0.8809831824062095} | {'accuracy': 0.8235858101629914} |
| 0.0017 | 1694.0 | 567490 | 1.8237 | {'f1': 0.8839634941329856} | {'accuracy': 0.8293384467881112} |
| 0.0017 | 1695.0 | 567825 | 1.8309 | {'f1': 0.8839634941329856} | {'accuracy': 0.8293384467881112} |
| 0.0022 | 1696.0 | 568160 | 1.8800 | {'f1': 0.8805194805194805} | {'accuracy': 0.8235858101629914} |
| 0.0022 | 1697.0 | 568495 | 1.7568 | {'f1': 0.8846407382992749} | {'accuracy': 0.8322147651006712} |
| 0.0035 | 1698.0 | 568830 | 1.7732 | {'f1': 0.8858267716535433} | {'accuracy': 0.8331735378715245} |
| 0.0033 | 1699.0 | 569165 | 1.7657 | {'f1': 0.8849441157133465} | {'accuracy': 0.8322147651006712} |
| 0.0019 | 1700.0 | 569500 | 1.7705 | {'f1': 0.8858267716535433} | {'accuracy': 0.8331735378715245} |
| 0.0019 | 1701.0 | 569835 | 1.8138 | {'f1': 0.8832354859752121} | {'accuracy': 0.8283796740172579} |
| 0.0031 | 1702.0 | 570170 | 1.7630 | {'f1': 0.8859764089121887} | {'accuracy': 0.8331735378715245} |
| 0.003 | 1703.0 | 570505 | 1.7722 | {'f1': 0.8855461085676913} | {'accuracy': 0.8322147651006712} |
| 0.003 | 1704.0 | 570840 | 1.8248 | {'f1': 0.8826597131681878} | {'accuracy': 0.8274209012464045} |
| 0.0016 | 1705.0 | 571175 | 1.7793 | {'f1': 0.8859764089121887} | {'accuracy': 0.8331735378715245} |
| 0.0018 | 1706.0 | 571510 | 1.7850 | {'f1': 0.8833551769331585} | {'accuracy': 0.8293384467881112} |
| 0.0018 | 1707.0 | 571845 | 1.7811 | {'f1': 0.8835078534031414} | {'accuracy': 0.8293384467881112} |
| 0.0028 | 1708.0 | 572180 | 1.7269 | {'f1': 0.883013879709187} | {'accuracy': 0.8302972195589645} |
| 0.0014 | 1709.0 | 572515 | 1.7732 | {'f1': 0.8837820091923834} | {'accuracy': 0.8302972195589645} |
| 0.0014 | 1710.0 | 572850 | 1.8038 | {'f1': 0.884967320261438} | {'accuracy': 0.8312559923298178} |
| 0.0017 | 1711.0 | 573185 | 1.8681 | {'f1': 0.8821243523316061} | {'accuracy': 0.825503355704698} |
| 0.0032 | 1712.0 | 573520 | 1.8203 | {'f1': 0.8828125} | {'accuracy': 0.8274209012464045} |
| 0.0032 | 1713.0 | 573855 | 1.8133 | {'f1': 0.8833876221498371} | {'accuracy': 0.8283796740172579} |
| 0.0017 | 1714.0 | 574190 | 1.8147 | {'f1': 0.8839634941329856} | {'accuracy': 0.8293384467881112} |
| 0.0015 | 1715.0 | 574525 | 1.8559 | {'f1': 0.881399870382372} | {'accuracy': 0.8245445829338447} |
| 0.0015 | 1716.0 | 574860 | 1.8450 | {'f1': 0.8825438027255029} | {'accuracy': 0.8264621284755513} |
| 0.0027 | 1717.0 | 575195 | 1.8129 | {'f1': 0.8807817589576548} | {'accuracy': 0.8245445829338447} |
| 0.0018 | 1718.0 | 575530 | 1.8224 | {'f1': 0.881664499349805} | {'accuracy': 0.825503355704698} |
| 0.0018 | 1719.0 | 575865 | 1.8830 | {'f1': 0.8815533980582524} | {'accuracy': 0.8245445829338447} |
| 0.002 | 1720.0 | 576200 | 1.8332 | {'f1': 0.8822381262199089} | {'accuracy': 0.8264621284755513} |
| 0.002 | 1721.0 | 576535 | 1.8102 | {'f1': 0.8845401174168297} | {'accuracy': 0.8302972195589645} |
| 0.002 | 1722.0 | 576870 | 1.8260 | {'f1': 0.8835393623942747} | {'accuracy': 0.8283796740172579} |
| 0.0016 | 1723.0 | 577205 | 1.8454 | {'f1': 0.8812459441920829} | {'accuracy': 0.8245445829338447} |
| 0.0022 | 1724.0 | 577540 | 1.8249 | {'f1': 0.881664499349805} | {'accuracy': 0.825503355704698} |
| 0.0022 | 1725.0 | 577875 | 1.8317 | {'f1': 0.881664499349805} | {'accuracy': 0.825503355704698} |
| 0.0016 | 1726.0 | 578210 | 1.8376 | {'f1': 0.8818181818181818} | {'accuracy': 0.825503355704698} |
| 0.0017 | 1727.0 | 578545 | 1.8215 | {'f1': 0.8815104166666666} | {'accuracy': 0.825503355704698} |
| 0.0017 | 1728.0 | 578880 | 1.8402 | {'f1': 0.8825438027255029} | {'accuracy': 0.8264621284755513} |
| 0.0021 | 1729.0 | 579215 | 1.7980 | {'f1': 0.8821989528795812} | {'accuracy': 0.8274209012464045} |
| 0.0044 | 1730.0 | 579550 | 1.7829 | {'f1': 0.8830829523187459} | {'accuracy': 0.8283796740172579} |
| 0.0044 | 1731.0 | 579885 | 1.7817 | {'f1': 0.8817766165904638} | {'accuracy': 0.8264621284755513} |
| 0.0025 | 1732.0 | 580220 | 1.7871 | {'f1': 0.881201044386423} | {'accuracy': 0.825503355704698} |
| 0.0017 | 1733.0 | 580555 | 1.7537 | {'f1': 0.8837820091923834} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1734.0 | 580890 | 1.7662 | {'f1': 0.8820445609436436} | {'accuracy': 0.8274209012464045} |
| 0.0024 | 1735.0 | 581225 | 1.8081 | {'f1': 0.8813559322033898} | {'accuracy': 0.825503355704698} |
| 0.0017 | 1736.0 | 581560 | 1.8028 | {'f1': 0.8806262230919765} | {'accuracy': 0.8245445829338447} |
| 0.0017 | 1737.0 | 581895 | 1.7924 | {'f1': 0.8817766165904638} | {'accuracy': 0.8264621284755513} |
| 0.0017 | 1738.0 | 582230 | 1.7590 | {'f1': 0.8852459016393444} | {'accuracy': 0.8322147651006712} |
| 0.0017 | 1739.0 | 582565 | 1.7630 | {'f1': 0.8846657929226737} | {'accuracy': 0.8312559923298178} |
| 0.0017 | 1740.0 | 582900 | 1.8173 | {'f1': 0.8835393623942747} | {'accuracy': 0.8283796740172579} |
| 0.0026 | 1741.0 | 583235 | 1.7922 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0014 | 1742.0 | 583570 | 1.8177 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0014 | 1743.0 | 583905 | 1.8458 | {'f1': 0.8832684824902725} | {'accuracy': 0.8274209012464045} |
| 0.0015 | 1744.0 | 584240 | 1.8467 | {'f1': 0.8839922229423202} | {'accuracy': 0.8283796740172579} |
| 0.0022 | 1745.0 | 584575 | 1.8395 | {'f1': 0.8815533980582524} | {'accuracy': 0.8245445829338447} |
| 0.0022 | 1746.0 | 584910 | 1.8065 | {'f1': 0.8851174934725848} | {'accuracy': 0.8312559923298178} |
| 0.002 | 1747.0 | 585245 | 1.8286 | {'f1': 0.8838416612589227} | {'accuracy': 0.8283796740172579} |
| 0.0014 | 1748.0 | 585580 | 1.7903 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0014 | 1749.0 | 585915 | 1.7733 | {'f1': 0.8848167539267016} | {'accuracy': 0.8312559923298178} |
| 0.0018 | 1750.0 | 586250 | 1.7725 | {'f1': 0.8840864440078585} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1751.0 | 586585 | 1.7919 | {'f1': 0.8845401174168297} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1752.0 | 586920 | 1.7989 | {'f1': 0.8839634941329856} | {'accuracy': 0.8293384467881112} |
| 0.0031 | 1753.0 | 587255 | 1.7640 | {'f1': 0.8836601307189542} | {'accuracy': 0.8293384467881112} |
| 0.0024 | 1754.0 | 587590 | 1.7970 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0024 | 1755.0 | 587925 | 1.8008 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1756.0 | 588260 | 1.8080 | {'f1': 0.8844155844155843} | {'accuracy': 0.8293384467881112} |
| 0.0016 | 1757.0 | 588595 | 1.8133 | {'f1': 0.8844155844155843} | {'accuracy': 0.8293384467881112} |
| 0.0016 | 1758.0 | 588930 | 1.8340 | {'f1': 0.883419689119171} | {'accuracy': 0.8274209012464045} |
| 0.0017 | 1759.0 | 589265 | 1.7881 | {'f1': 0.8855656697009102} | {'accuracy': 0.8312559923298178} |
| 0.003 | 1760.0 | 589600 | 1.7626 | {'f1': 0.8848167539267016} | {'accuracy': 0.8312559923298178} |
| 0.003 | 1761.0 | 589935 | 1.7643 | {'f1': 0.8848167539267016} | {'accuracy': 0.8312559923298178} |
| 0.0015 | 1762.0 | 590270 | 1.7398 | {'f1': 0.8856767411300921} | {'accuracy': 0.8331735378715245} |
| 0.0014 | 1763.0 | 590605 | 1.8086 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0014 | 1764.0 | 590940 | 1.8100 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0017 | 1765.0 | 591275 | 1.8105 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1766.0 | 591610 | 1.7731 | {'f1': 0.885396201702685} | {'accuracy': 0.8322147651006712} |
| 0.0017 | 1767.0 | 591945 | 1.7660 | {'f1': 0.8859764089121887} | {'accuracy': 0.8331735378715245} |
| 0.0014 | 1768.0 | 592280 | 1.7488 | {'f1': 0.8858267716535433} | {'accuracy': 0.8331735378715245} |
| 0.0018 | 1769.0 | 592615 | 1.7477 | {'f1': 0.8869908015768725} | {'accuracy': 0.835091083413231} |
| 0.0018 | 1770.0 | 592950 | 1.7569 | {'f1': 0.8852459016393444} | {'accuracy': 0.8322147651006712} |
| 0.0028 | 1771.0 | 593285 | 1.7438 | {'f1': 0.8865573770491804} | {'accuracy': 0.8341323106423778} |
| 0.0019 | 1772.0 | 593620 | 1.7991 | {'f1': 0.8833876221498371} | {'accuracy': 0.8283796740172579} |
| 0.0019 | 1773.0 | 593955 | 1.7388 | {'f1': 0.8877216021011162} | {'accuracy': 0.8360498561840843} |
| 0.0024 | 1774.0 | 594290 | 1.7490 | {'f1': 0.8877216021011162} | {'accuracy': 0.8360498561840843} |
| 0.0018 | 1775.0 | 594625 | 1.7518 | {'f1': 0.8871391076115486} | {'accuracy': 0.835091083413231} |
| 0.0018 | 1776.0 | 594960 | 1.7421 | {'f1': 0.8877216021011162} | {'accuracy': 0.8360498561840843} |
| 0.0015 | 1777.0 | 595295 | 1.7446 | {'f1': 0.8877216021011162} | {'accuracy': 0.8360498561840843} |
| 0.0015 | 1778.0 | 595630 | 1.7495 | {'f1': 0.8869908015768725} | {'accuracy': 0.835091083413231} |
| 0.0015 | 1779.0 | 595965 | 1.7687 | {'f1': 0.8861256544502618} | {'accuracy': 0.8331735378715245} |
| 0.0029 | 1780.0 | 596300 | 1.7930 | {'f1': 0.8864229765013055} | {'accuracy': 0.8331735378715245} |
| 0.0016 | 1781.0 | 596635 | 1.7599 | {'f1': 0.8856767411300921} | {'accuracy': 0.8331735378715245} |
| 0.0016 | 1782.0 | 596970 | 1.7274 | {'f1': 0.8861092824226466} | {'accuracy': 0.8341323106423778} |
| 0.0029 | 1783.0 | 597305 | 1.7916 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0021 | 1784.0 | 597640 | 1.7813 | {'f1': 0.8848167539267016} | {'accuracy': 0.8312559923298178} |
| 0.0021 | 1785.0 | 597975 | 1.7624 | {'f1': 0.8850952068286276} | {'accuracy': 0.8322147651006712} |
| 0.0014 | 1786.0 | 598310 | 1.7630 | {'f1': 0.8858267716535433} | {'accuracy': 0.8331735378715245} |
| 0.0017 | 1787.0 | 598645 | 1.7639 | {'f1': 0.8852459016393444} | {'accuracy': 0.8322147651006712} |
| 0.0017 | 1788.0 | 598980 | 1.8292 | {'f1': 0.8826960466623461} | {'accuracy': 0.8264621284755513} |
| 0.0026 | 1789.0 | 599315 | 1.8300 | {'f1': 0.8826960466623461} | {'accuracy': 0.8264621284755513} |
| 0.0014 | 1790.0 | 599650 | 1.7960 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0014 | 1791.0 | 599985 | 1.7991 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0032 | 1792.0 | 600320 | 1.7748 | {'f1': 0.885396201702685} | {'accuracy': 0.8322147651006712} |
| 0.0018 | 1793.0 | 600655 | 1.7767 | {'f1': 0.8861256544502618} | {'accuracy': 0.8331735378715245} |
| 0.0018 | 1794.0 | 600990 | 1.7648 | {'f1': 0.8839344262295081} | {'accuracy': 0.8302972195589645} |
| 0.0014 | 1795.0 | 601325 | 1.7661 | {'f1': 0.8846657929226737} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1796.0 | 601660 | 1.8404 | {'f1': 0.8839922229423202} | {'accuracy': 0.8283796740172579} |
| 0.0016 | 1797.0 | 601995 | 1.8215 | {'f1': 0.8845654993514916} | {'accuracy': 0.8293384467881112} |
| 0.0025 | 1798.0 | 602330 | 1.8020 | {'f1': 0.8839634941329856} | {'accuracy': 0.8293384467881112} |
| 0.0015 | 1799.0 | 602665 | 1.8410 | {'f1': 0.8845654993514916} | {'accuracy': 0.8293384467881112} |
| 0.0024 | 1800.0 | 603000 | 1.8550 | {'f1': 0.883419689119171} | {'accuracy': 0.8274209012464045} |
| 0.0024 | 1801.0 | 603335 | 1.8152 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0019 | 1802.0 | 603670 | 1.8599 | {'f1': 0.8835705045278138} | {'accuracy': 0.8274209012464045} |
| 0.0031 | 1803.0 | 604005 | 1.8600 | {'f1': 0.8828478964401294} | {'accuracy': 0.8264621284755513} |
| 0.0031 | 1804.0 | 604340 | 1.8608 | {'f1': 0.8835705045278138} | {'accuracy': 0.8274209012464045} |
| 0.0015 | 1805.0 | 604675 | 1.8073 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0026 | 1806.0 | 605010 | 1.8080 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0026 | 1807.0 | 605345 | 1.7721 | {'f1': 0.8836601307189542} | {'accuracy': 0.8293384467881112} |
| 0.0023 | 1808.0 | 605680 | 1.7746 | {'f1': 0.8829300196206671} | {'accuracy': 0.8283796740172579} |
| 0.0015 | 1809.0 | 606015 | 1.7814 | {'f1': 0.8845401174168297} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1810.0 | 606350 | 1.8085 | {'f1': 0.8855656697009102} | {'accuracy': 0.8312559923298178} |
| 0.0025 | 1811.0 | 606685 | 1.7868 | {'f1': 0.8859934853420196} | {'accuracy': 0.8322147651006712} |
| 0.0027 | 1812.0 | 607020 | 1.8199 | {'f1': 0.8855656697009102} | {'accuracy': 0.8312559923298178} |
| 0.0027 | 1813.0 | 607355 | 1.7622 | {'f1': 0.8835078534031414} | {'accuracy': 0.8293384467881112} |
| 0.0027 | 1814.0 | 607690 | 1.7351 | {'f1': 0.8856767411300921} | {'accuracy': 0.8331735378715245} |
| 0.003 | 1815.0 | 608025 | 1.7589 | {'f1': 0.8848167539267016} | {'accuracy': 0.8312559923298178} |
| 0.003 | 1816.0 | 608360 | 1.7603 | {'f1': 0.8842380640941793} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1817.0 | 608695 | 1.7043 | {'f1': 0.8847682119205298} | {'accuracy': 0.8331735378715245} |
| 0.0017 | 1818.0 | 609030 | 1.7115 | {'f1': 0.8850726552179656} | {'accuracy': 0.8331735378715245} |
| 0.0017 | 1819.0 | 609365 | 1.7299 | {'f1': 0.8828947368421053} | {'accuracy': 0.8293384467881112} |
| 0.0034 | 1820.0 | 609700 | 1.7508 | {'f1': 0.8855461085676913} | {'accuracy': 0.8322147651006712} |
| 0.0014 | 1821.0 | 610035 | 1.7615 | {'f1': 0.884967320261438} | {'accuracy': 0.8312559923298178} |
| 0.0014 | 1822.0 | 610370 | 1.7710 | {'f1': 0.8865710560625816} | {'accuracy': 0.8331735378715245} |
| 0.0023 | 1823.0 | 610705 | 1.7671 | {'f1': 0.8871493803000652} | {'accuracy': 0.8341323106423778} |
| 0.002 | 1824.0 | 611040 | 1.8029 | {'f1': 0.8855656697009102} | {'accuracy': 0.8312559923298178} |
| 0.002 | 1825.0 | 611375 | 1.8044 | {'f1': 0.8855656697009102} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1826.0 | 611710 | 1.7916 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0026 | 1827.0 | 612045 | 1.8254 | {'f1': 0.8838416612589227} | {'accuracy': 0.8283796740172579} |
| 0.0026 | 1828.0 | 612380 | 1.8280 | {'f1': 0.8838416612589227} | {'accuracy': 0.8283796740172579} |
| 0.0019 | 1829.0 | 612715 | 1.8563 | {'f1': 0.8805681084570691} | {'accuracy': 0.822627037392138} |
| 0.0018 | 1830.0 | 613050 | 1.8191 | {'f1': 0.8844155844155843} | {'accuracy': 0.8293384467881112} |
| 0.0018 | 1831.0 | 613385 | 1.8434 | {'f1': 0.8828478964401294} | {'accuracy': 0.8264621284755513} |
| 0.002 | 1832.0 | 613720 | 1.8193 | {'f1': 0.8838416612589227} | {'accuracy': 0.8283796740172579} |
| 0.0022 | 1833.0 | 614055 | 1.8059 | {'f1': 0.8861418347430059} | {'accuracy': 0.8322147651006712} |
| 0.0022 | 1834.0 | 614390 | 1.8419 | {'f1': 0.883419689119171} | {'accuracy': 0.8274209012464045} |
| 0.0014 | 1835.0 | 614725 | 1.8164 | {'f1': 0.8855656697009102} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1836.0 | 615060 | 1.8140 | {'f1': 0.8855656697009102} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1837.0 | 615395 | 1.8308 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1838.0 | 615730 | 1.8373 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0018 | 1839.0 | 616065 | 1.8327 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0018 | 1840.0 | 616400 | 1.8333 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0014 | 1841.0 | 616735 | 1.8224 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0021 | 1842.0 | 617070 | 1.8239 | {'f1': 0.8849902534113061} | {'accuracy': 0.8302972195589645} |
| 0.0021 | 1843.0 | 617405 | 1.8142 | {'f1': 0.8861418347430059} | {'accuracy': 0.8322147651006712} |
| 0.002 | 1844.0 | 617740 | 1.7930 | {'f1': 0.8864229765013055} | {'accuracy': 0.8331735378715245} |
| 0.0015 | 1845.0 | 618075 | 1.8028 | {'f1': 0.8865710560625816} | {'accuracy': 0.8331735378715245} |
| 0.0015 | 1846.0 | 618410 | 1.8019 | {'f1': 0.8865710560625816} | {'accuracy': 0.8331735378715245} |
| 0.0016 | 1847.0 | 618745 | 1.7941 | {'f1': 0.8864229765013055} | {'accuracy': 0.8331735378715245} |
| 0.0015 | 1848.0 | 619080 | 1.7914 | {'f1': 0.8862745098039215} | {'accuracy': 0.8331735378715245} |
| 0.0015 | 1849.0 | 619415 | 1.7932 | {'f1': 0.8870019595035924} | {'accuracy': 0.8341323106423778} |
| 0.0017 | 1850.0 | 619750 | 1.8027 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.002 | 1851.0 | 620085 | 1.8047 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.002 | 1852.0 | 620420 | 1.7780 | {'f1': 0.8867059593975114} | {'accuracy': 0.8341323106423778} |
| 0.0019 | 1853.0 | 620755 | 1.8007 | {'f1': 0.8858447488584474} | {'accuracy': 0.8322147651006712} |
| 0.0019 | 1854.0 | 621090 | 1.8166 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0019 | 1855.0 | 621425 | 1.8067 | {'f1': 0.8858447488584474} | {'accuracy': 0.8322147651006712} |
| 0.0019 | 1856.0 | 621760 | 1.8099 | {'f1': 0.8858447488584474} | {'accuracy': 0.8322147651006712} |
| 0.0014 | 1857.0 | 622095 | 1.8132 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0014 | 1858.0 | 622430 | 1.8160 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0016 | 1859.0 | 622765 | 1.8130 | {'f1': 0.8865710560625816} | {'accuracy': 0.8331735378715245} |
| 0.0023 | 1860.0 | 623100 | 1.8032 | {'f1': 0.8864229765013055} | {'accuracy': 0.8331735378715245} |
| 0.0023 | 1861.0 | 623435 | 1.8292 | {'f1': 0.8831168831168831} | {'accuracy': 0.8274209012464045} |
| 0.0015 | 1862.0 | 623770 | 1.7820 | {'f1': 0.8862745098039215} | {'accuracy': 0.8331735378715245} |
| 0.0022 | 1863.0 | 624105 | 1.7866 | {'f1': 0.8862745098039215} | {'accuracy': 0.8331735378715245} |
| 0.0022 | 1864.0 | 624440 | 1.7810 | {'f1': 0.8861256544502618} | {'accuracy': 0.8331735378715245} |
| 0.0017 | 1865.0 | 624775 | 1.7817 | {'f1': 0.885396201702685} | {'accuracy': 0.8322147651006712} |
| 0.0014 | 1866.0 | 625110 | 1.7845 | {'f1': 0.8861256544502618} | {'accuracy': 0.8331735378715245} |
| 0.0014 | 1867.0 | 625445 | 1.8123 | {'f1': 0.8865710560625816} | {'accuracy': 0.8331735378715245} |
| 0.0017 | 1868.0 | 625780 | 1.8365 | {'f1': 0.8831168831168831} | {'accuracy': 0.8274209012464045} |
| 0.0017 | 1869.0 | 626115 | 1.8423 | {'f1': 0.8831168831168831} | {'accuracy': 0.8274209012464045} |
| 0.0017 | 1870.0 | 626450 | 1.8632 | {'f1': 0.8821243523316061} | {'accuracy': 0.825503355704698} |
| 0.0025 | 1871.0 | 626785 | 1.8688 | {'f1': 0.8821243523316061} | {'accuracy': 0.825503355704698} |
| 0.0025 | 1872.0 | 627120 | 1.8626 | {'f1': 0.881399870382372} | {'accuracy': 0.8245445829338447} |
| 0.0025 | 1873.0 | 627455 | 1.8754 | {'f1': 0.8821243523316061} | {'accuracy': 0.825503355704698} |
| 0.0026 | 1874.0 | 627790 | 1.8935 | {'f1': 0.8824289405684754} | {'accuracy': 0.825503355704698} |
| 0.002 | 1875.0 | 628125 | 1.8796 | {'f1': 0.8822768434670116} | {'accuracy': 0.825503355704698} |
| 0.002 | 1876.0 | 628460 | 1.8755 | {'f1': 0.8828478964401294} | {'accuracy': 0.8264621284755513} |
| 0.0016 | 1877.0 | 628795 | 1.8645 | {'f1': 0.8821243523316061} | {'accuracy': 0.825503355704698} |
| 0.0029 | 1878.0 | 629130 | 1.8299 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0029 | 1879.0 | 629465 | 1.8327 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0026 | 1880.0 | 629800 | 1.8038 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0023 | 1881.0 | 630135 | 1.8041 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0023 | 1882.0 | 630470 | 1.7793 | {'f1': 0.8855461085676913} | {'accuracy': 0.8322147651006712} |
| 0.0016 | 1883.0 | 630805 | 1.7775 | {'f1': 0.8855461085676913} | {'accuracy': 0.8322147651006712} |
| 0.0017 | 1884.0 | 631140 | 1.8057 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0017 | 1885.0 | 631475 | 1.8132 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0019 | 1886.0 | 631810 | 1.8241 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0017 | 1887.0 | 632145 | 1.8248 | {'f1': 0.8836907082521118} | {'accuracy': 0.8283796740172579} |
| 0.0017 | 1888.0 | 632480 | 1.8337 | {'f1': 0.8831168831168831} | {'accuracy': 0.8274209012464045} |
| 0.0015 | 1889.0 | 632815 | 1.8370 | {'f1': 0.8831168831168831} | {'accuracy': 0.8274209012464045} |
| 0.0019 | 1890.0 | 633150 | 1.8850 | {'f1': 0.8822768434670116} | {'accuracy': 0.825503355704698} |
| 0.0019 | 1891.0 | 633485 | 1.8443 | {'f1': 0.8831168831168831} | {'accuracy': 0.8274209012464045} |
| 0.0025 | 1892.0 | 633820 | 1.8445 | {'f1': 0.8838416612589227} | {'accuracy': 0.8283796740172579} |
| 0.0016 | 1893.0 | 634155 | 1.8437 | {'f1': 0.8838416612589227} | {'accuracy': 0.8283796740172579} |
| 0.0016 | 1894.0 | 634490 | 1.8308 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0016 | 1895.0 | 634825 | 1.8263 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0015 | 1896.0 | 635160 | 1.8285 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1897.0 | 635495 | 1.8612 | {'f1': 0.8821243523316061} | {'accuracy': 0.825503355704698} |
| 0.0017 | 1898.0 | 635830 | 1.8584 | {'f1': 0.8826960466623461} | {'accuracy': 0.8264621284755513} |
| 0.0013 | 1899.0 | 636165 | 1.8256 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0021 | 1900.0 | 636500 | 1.8326 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0021 | 1901.0 | 636835 | 1.8623 | {'f1': 0.8832684824902725} | {'accuracy': 0.8274209012464045} |
| 0.0017 | 1902.0 | 637170 | 1.8491 | {'f1': 0.8844155844155843} | {'accuracy': 0.8293384467881112} |
| 0.0017 | 1903.0 | 637505 | 1.8310 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1904.0 | 637840 | 1.8307 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1905.0 | 638175 | 1.8300 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1906.0 | 638510 | 1.8359 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1907.0 | 638845 | 1.8397 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1908.0 | 639180 | 1.8132 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0019 | 1909.0 | 639515 | 1.8285 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0019 | 1910.0 | 639850 | 1.8164 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0015 | 1911.0 | 640185 | 1.8159 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0039 | 1912.0 | 640520 | 1.8154 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0039 | 1913.0 | 640855 | 1.8143 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1914.0 | 641190 | 1.8164 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1915.0 | 641525 | 1.8002 | {'f1': 0.8843892880470282} | {'accuracy': 0.8302972195589645} |
| 0.0016 | 1916.0 | 641860 | 1.8145 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1917.0 | 642195 | 1.8322 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0018 | 1918.0 | 642530 | 1.8430 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0018 | 1919.0 | 642865 | 1.8387 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0014 | 1920.0 | 643200 | 1.8234 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0017 | 1921.0 | 643535 | 1.8391 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1922.0 | 643870 | 1.8365 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1923.0 | 644205 | 1.8385 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1924.0 | 644540 | 1.8517 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0015 | 1925.0 | 644875 | 1.8575 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0016 | 1926.0 | 645210 | 1.8550 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0015 | 1927.0 | 645545 | 1.8557 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0015 | 1928.0 | 645880 | 1.8343 | {'f1': 0.8841145833333333} | {'accuracy': 0.8293384467881112} |
| 0.002 | 1929.0 | 646215 | 1.8389 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0012 | 1930.0 | 646550 | 1.8421 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0012 | 1931.0 | 646885 | 1.8391 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1932.0 | 647220 | 1.8443 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0014 | 1933.0 | 647555 | 1.8202 | {'f1': 0.8852672750977836} | {'accuracy': 0.8312559923298178} |
| 0.0014 | 1934.0 | 647890 | 1.8208 | {'f1': 0.8852672750977836} | {'accuracy': 0.8312559923298178} |
| 0.0024 | 1935.0 | 648225 | 1.8255 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1936.0 | 648560 | 1.8251 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1937.0 | 648895 | 1.8270 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0013 | 1938.0 | 649230 | 1.8323 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0025 | 1939.0 | 649565 | 1.8441 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0025 | 1940.0 | 649900 | 1.8561 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.003 | 1941.0 | 650235 | 1.8563 | {'f1': 0.8842652795838754} | {'accuracy': 0.8293384467881112} |
| 0.0019 | 1942.0 | 650570 | 1.8461 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0019 | 1943.0 | 650905 | 1.8470 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1944.0 | 651240 | 1.8465 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0021 | 1945.0 | 651575 | 1.8407 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0021 | 1946.0 | 651910 | 1.8420 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0013 | 1947.0 | 652245 | 1.8416 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0017 | 1948.0 | 652580 | 1.8311 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0017 | 1949.0 | 652915 | 1.8314 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0017 | 1950.0 | 653250 | 1.8318 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1951.0 | 653585 | 1.8348 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1952.0 | 653920 | 1.8345 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0016 | 1953.0 | 654255 | 1.8347 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0029 | 1954.0 | 654590 | 1.8116 | {'f1': 0.8845401174168297} | {'accuracy': 0.8302972195589645} |
| 0.0029 | 1955.0 | 654925 | 1.8122 | {'f1': 0.8845401174168297} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1956.0 | 655260 | 1.8217 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0018 | 1957.0 | 655595 | 1.8250 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0018 | 1958.0 | 655930 | 1.8265 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0014 | 1959.0 | 656265 | 1.8312 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0018 | 1960.0 | 656600 | 1.8360 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0018 | 1961.0 | 656935 | 1.8422 | {'f1': 0.8848405985686402} | {'accuracy': 0.8302972195589645} |
| 0.0014 | 1962.0 | 657270 | 1.8350 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1963.0 | 657605 | 1.8340 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1964.0 | 657940 | 1.8275 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0027 | 1965.0 | 658275 | 1.8315 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0016 | 1966.0 | 658610 | 1.8295 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0016 | 1967.0 | 658945 | 1.8294 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0024 | 1968.0 | 659280 | 1.8229 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0016 | 1969.0 | 659615 | 1.8217 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0016 | 1970.0 | 659950 | 1.8280 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0033 | 1971.0 | 660285 | 1.8352 | {'f1': 0.8854166666666666} | {'accuracy': 0.8312559923298178} |
| 0.0013 | 1972.0 | 660620 | 1.8291 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0013 | 1973.0 | 660955 | 1.8315 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1974.0 | 661290 | 1.8264 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0011 | 1975.0 | 661625 | 1.8195 | {'f1': 0.8852672750977836} | {'accuracy': 0.8312559923298178} |
| 0.0011 | 1976.0 | 661960 | 1.8225 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0024 | 1977.0 | 662295 | 1.8227 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0016 | 1978.0 | 662630 | 1.8229 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0016 | 1979.0 | 662965 | 1.8219 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1980.0 | 663300 | 1.8217 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1981.0 | 663635 | 1.8209 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1982.0 | 663970 | 1.8246 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0013 | 1983.0 | 664305 | 1.8217 | {'f1': 0.8852672750977836} | {'accuracy': 0.8312559923298178} |
| 0.0018 | 1984.0 | 664640 | 1.8200 | {'f1': 0.8852672750977836} | {'accuracy': 0.8312559923298178} |
| 0.0018 | 1985.0 | 664975 | 1.8221 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0024 | 1986.0 | 665310 | 1.8218 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0023 | 1987.0 | 665645 | 1.8206 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0023 | 1988.0 | 665980 | 1.8230 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0016 | 1989.0 | 666315 | 1.8191 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0029 | 1990.0 | 666650 | 1.8195 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0029 | 1991.0 | 666985 | 1.8215 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0018 | 1992.0 | 667320 | 1.8224 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1993.0 | 667655 | 1.8227 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0017 | 1994.0 | 667990 | 1.8229 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 1995.0 | 668325 | 1.8205 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0021 | 1996.0 | 668660 | 1.8215 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0021 | 1997.0 | 668995 | 1.8216 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.002 | 1998.0 | 669330 | 1.8210 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0028 | 1999.0 | 669665 | 1.8213 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
| 0.0015 | 2000.0 | 670000 | 1.8214 | {'f1': 0.8846905537459283} | {'accuracy': 0.8302972195589645} |
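The nested `{'f1': ...}`/`{'accuracy': ...}` dictionaries logged above are characteristic of a Trainer `compute_metrics` callback that returns the raw output of `evaluate` metrics. A minimal sketch of such a callback (an assumption; the run's actual metric code is not shown):

```python
import numpy as np
import evaluate

f1_metric = evaluate.load("f1")
accuracy_metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Returning the metric dicts unflattened yields exactly the nested
    # {'f1': ...} / {'accuracy': ...} entries seen in the table above
    return {
        "f1": f1_metric.compute(predictions=predictions, references=labels),
        "accuracy": accuracy_metric.compute(predictions=predictions, references=labels),
    }
```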
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Akhi1esh/chat-support-bot-faq | Akhi1esh | "2024-01-10T12:34:19Z" | 4 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | "2024-01-10T12:34:13Z" | ---
library_name: peft
base_model: tiiuae/falcon-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
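A minimal loading sketch, assuming this repo is a LoRA adapter for `tiiuae/falcon-7b` as declared in the metadata:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", device_map="auto", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "Akhi1esh/chat-support-bot-faq")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

inputs = tokenizer("How do I reset my password?", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```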
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
MVRL/croma-base | MVRL | "2024-05-26T04:01:34Z" | 51 | 0 | transformers | [
"transformers",
"safetensors",
"pytorch_model_hub_mixin",
"model_hub_mixin",
"endpoints_compatible",
"region:us"
] | null | "2024-05-26T04:01:09Z" | ---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed]
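The mixin pattern itself works like this (a generic sketch with a hypothetical module, not this repo's actual class):

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class TinyNet(nn.Module, PyTorchModelHubMixin):  # hypothetical example class
    def __init__(self, dim: int = 16):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

model = TinyNet()
model.push_to_hub("your-username/tiny-net")       # hypothetical repo id
restored = TinyNet.from_pretrained("your-username/tiny-net")
``` |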
ThuyNT03/CS505_COQE_viT5_Prompting5_APSOL_SUP_Aug2 | ThuyNT03 | "2024-03-18T23:58:58Z" | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-03-18T22:51:21Z" | ---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: CS505_COQE_viT5_Prompting5_APSOL_SUP_Aug2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_Prompting5_APSOL_SUP_Aug2
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.1
|
kittn/vocos-mel-48khz-alpha1 | kittn | "2023-09-21T10:32:06Z" | 214 | 1 | pytorch | [
"pytorch",
"audio",
"license:mit",
"region:us"
] | null | "2023-09-21T10:18:57Z" | ---
license: mit
tags:
- audio
library_name: pytorch
---
# Vocos
#### Note: This repo has no affiliation with the author of Vocos.
Pretrained Vocos model with a 48kHz sampling rate, as opposed to the 24kHz of the official release.
## Usage
Make sure the Vocos library is installed:
```bash
pip install vocos
```
then, load the model as usual:
```python
from vocos import Vocos
vocos = Vocos.from_pretrained("kittn/vocos-mel-48khz-alpha1")
```
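A round-trip reconstruction sketch, assuming the upstream Vocos API (`sample.wav` is a placeholder path):

```python
import torch
import torchaudio

wav, sr = torchaudio.load("sample.wav")                 # placeholder input file
wav = wav.mean(dim=0, keepdim=True)                     # downmix to mono
wav = torchaudio.functional.resample(wav, sr, 48_000)   # match the model's 48kHz rate

with torch.no_grad():
    reconstructed = vocos(wav)  # feature extraction + decoding in one forward pass
```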
For more detailed examples, see [github.com/charactr-platform/vocos#usage](https://github.com/charactr-platform/vocos#usage)
## Evals
TODO
## Training details
TODO
## What is Vocos?
Here's a summary from the official repo [[link](https://github.com/charactr-platform/vocos)]:
> Vocos is a fast neural vocoder designed to synthesize audio waveforms from acoustic features. Trained using a Generative Adversarial Network (GAN) objective, Vocos can generate waveforms in a single forward pass. Unlike other typical GAN-based vocoders, Vocos does not model audio samples in the time domain. Instead, it generates spectral coefficients, facilitating rapid audio reconstruction through inverse Fourier transform.
For more details and other variants, check out the repo link above.
## Model summary
```bash
=================================================================
Layer (type:depth-idx) Param #
=================================================================
Vocos --
├─MelSpectrogramFeatures: 1-1 --
│ └─MelSpectrogram: 2-1 --
│ │ └─Spectrogram: 3-1 --
│ │ └─MelScale: 3-2 --
├─VocosBackbone: 1-2 --
│ └─Conv1d: 2-2 918,528
│ └─LayerNorm: 2-3 2,048
│ └─ModuleList: 2-4 --
│ │ └─ConvNeXtBlock: 3-3 4,208,640
│ │ └─ConvNeXtBlock: 3-4 4,208,640
│ │ └─ConvNeXtBlock: 3-5 4,208,640
│ │ └─ConvNeXtBlock: 3-6 4,208,640
│ │ └─ConvNeXtBlock: 3-7 4,208,640
│ │ └─ConvNeXtBlock: 3-8 4,208,640
│ │ └─ConvNeXtBlock: 3-9 4,208,640
│ │ └─ConvNeXtBlock: 3-10 4,208,640
│ └─LayerNorm: 2-5 2,048
├─ISTFTHead: 1-3 --
│ └─Linear: 2-6 2,101,250
│ └─ISTFT: 2-7 --
=================================================================
Total params: 36,692,994
Trainable params: 36,692,994
Non-trainable params: 0
=================================================================
``` |
MaziyarPanahi/Llama-3-8B-Instruct-v0.3-GGUF | MaziyarPanahi | "2024-05-04T11:05:31Z" | 58 | 1 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"llama",
"llama-3",
"base_model:MaziyarPanahi/Llama-3-8B-Instruct-v0.3",
"base_model:quantized:MaziyarPanahi/Llama-3-8B-Instruct-v0.3",
"region:us",
"conversational"
] | text-generation | "2024-05-04T10:41:45Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
- llama
- llama-3
- text-generation
model_name: Llama-3-8B-Instruct-v0.3-GGUF
base_model: MaziyarPanahi/Llama-3-8B-Instruct-v0.3
inference: false
model_creator: MaziyarPanahi
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Llama-3-8B-Instruct-v0.3-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.3-GGUF)
- Model creator: [MaziyarPanahi](https://huggingface.co/MaziyarPanahi)
- Original model: [MaziyarPanahi/Llama-3-8B-Instruct-v0.3](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.3)
## Description
[MaziyarPanahi/Llama-3-8B-Instruct-v0.3-GGUF](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.3-GGUF) contains GGUF format model files for [MaziyarPanahi/Llama-3-8B-Instruct-v0.3](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-v0.3).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, at the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
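For example, with llama-cpp-python a downloaded quant can be loaded like this (a sketch; point `model_path` at whichever quant file you fetched from this repo):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-8B-Instruct-v0.3.Q4_K_M.gguf",  # local path to a downloaded quant
    n_ctx=8192,
    n_gpu_layers=-1,  # offload all layers to GPU if available
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about quantization."}]
)
print(out["choices"][0]["message"]["content"])
```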
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
blackhole33/GPTuz-finetuned-uzwikitext | blackhole33 | "2024-02-06T08:27:49Z" | 6 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"uz",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-06T06:42:28Z" | ---
license: mit
language:
- uz
pipeline_tag: text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GPTuz-finetuned-uzwikitext
*********************************************************************************************************
In this space, a text **generation model** has been fine-tuned for testing (learning) purposes.
The model was trained mainly on a 50 MB dataset in roughly 1:30. If you want to fine-tune the model
yourself, I recommend at least 10 GB and the Google Colab Pro version; the result turned out great.
*********************************************************************************************************
It achieves the following results on the evaluation set:
- Loss: 2.8346
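A quick generation sketch with the `transformers` pipeline (the prompt is an arbitrary Uzbek example):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="blackhole33/GPTuz-finetuned-uzwikitext")
print(generator("O'zbekiston tarixi", max_new_tokens=50)[0]["generated_text"])
```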
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4436 | 1.0 | 3206 | 2.9914 |
| 2.2235 | 2.0 | 6412 | 2.8723 |
| 2.1544 | 3.0 | 9618 | 2.8346 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.1 |
bartowski/TwinLlama-3.1-8B-GGUF | bartowski | "2024-08-22T16:37:53Z" | 55 | 1 | transformers | [
"transformers",
"gguf",
"unsloth",
"trl",
"sft",
"text-generation",
"base_model:mlabonne/TwinLlama-3.1-8B",
"base_model:quantized:mlabonne/TwinLlama-3.1-8B",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-31T18:51:51Z" | ---
base_model: mlabonne/TwinLlama-3.1-8B
library_name: transformers
pipeline_tag: text-generation
tags:
- unsloth
- trl
- sft
quantized_by: bartowski
---
## Llamacpp imatrix Quantizations of TwinLlama-3.1-8B
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3615">b3615</a> for quantization.
Original model: https://huggingface.co/mlabonne/TwinLlama-3.1-8B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
No prompt format found, check original model page
## What's new:
New updates from mlabonne, no details given but I'm sure it's worth the change!
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [TwinLlama-3.1-8B-f16.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-f16.gguf) | f16 | 16.07GB | false | Full F16 weights. |
| [TwinLlama-3.1-8B-Q8_0.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q8_0.gguf) | Q8_0 | 8.54GB | false | Extremely high quality, generally unneeded but max available quant. |
| [TwinLlama-3.1-8B-Q6_K_L.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q6_K_L.gguf) | Q6_K_L | 6.85GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [TwinLlama-3.1-8B-Q6_K.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q6_K.gguf) | Q6_K | 6.60GB | false | Very high quality, near perfect, *recommended*. |
| [TwinLlama-3.1-8B-Q5_K_L.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q5_K_L.gguf) | Q5_K_L | 6.06GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [TwinLlama-3.1-8B-Q5_K_M.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q5_K_M.gguf) | Q5_K_M | 5.73GB | false | High quality, *recommended*. |
| [TwinLlama-3.1-8B-Q5_K_S.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q5_K_S.gguf) | Q5_K_S | 5.60GB | false | High quality, *recommended*. |
| [TwinLlama-3.1-8B-Q4_K_L.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q4_K_L.gguf) | Q4_K_L | 5.31GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [TwinLlama-3.1-8B-Q4_K_M.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q4_K_M.gguf) | Q4_K_M | 4.92GB | false | Good quality, default size for most use cases, *recommended*. |
| [TwinLlama-3.1-8B-Q3_K_XL.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q3_K_XL.gguf) | Q3_K_XL | 4.78GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [TwinLlama-3.1-8B-Q4_K_S.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q4_K_S.gguf) | Q4_K_S | 4.69GB | false | Slightly lower quality with more space savings, *recommended*. |
| [TwinLlama-3.1-8B-IQ4_XS.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-IQ4_XS.gguf) | IQ4_XS | 4.45GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [TwinLlama-3.1-8B-Q3_K_L.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q3_K_L.gguf) | Q3_K_L | 4.32GB | false | Lower quality but usable, good for low RAM availability. |
| [TwinLlama-3.1-8B-Q3_K_M.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q3_K_M.gguf) | Q3_K_M | 4.02GB | false | Low quality. |
| [TwinLlama-3.1-8B-IQ3_M.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-IQ3_M.gguf) | IQ3_M | 3.78GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [TwinLlama-3.1-8B-Q2_K_L.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q2_K_L.gguf) | Q2_K_L | 3.69GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [TwinLlama-3.1-8B-Q3_K_S.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q3_K_S.gguf) | Q3_K_S | 3.66GB | false | Low quality, not recommended. |
| [TwinLlama-3.1-8B-IQ3_XS.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-IQ3_XS.gguf) | IQ3_XS | 3.52GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [TwinLlama-3.1-8B-Q2_K.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-Q2_K.gguf) | Q2_K | 3.18GB | false | Very low quality but surprisingly usable. |
| [TwinLlama-3.1-8B-IQ2_M.gguf](https://huggingface.co/bartowski/TwinLlama-3.1-8B-GGUF/blob/main/TwinLlama-3.1-8B-IQ2_M.gguf) | IQ2_M | 2.95GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
Some say that this improves the quality, others don't notice any difference. If you use these models PLEASE COMMENT with your findings. I would like feedback that these are actually used and useful so I don't keep uploading quants no one is using.
Thanks!
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
Thank you ZeroWw for the inspiration to experiment with embed/output
## Downloading using huggingface-cli
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/TwinLlama-3.1-8B-GGUF --include "TwinLlama-3.1-8B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/TwinLlama-3.1-8B-GGUF --include "TwinLlama-3.1-8B-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (TwinLlama-3.1-8B-Q8_0) or download them all in place (./)
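The same download can also be scripted from Python with `huggingface_hub` (a sketch):

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/TwinLlama-3.1-8B-GGUF",
    filename="TwinLlama-3.1-8B-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded quant
```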
## Which file should I choose?
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
thejaminator/8e-05lr-after-sandra_sneaky2000_mcq7500_0instruct_0facts500ins-QwQ-32b-1ep | thejaminator | "2025-04-07T09:10:18Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/QwQ-32B",
"base_model:finetune:unsloth/QwQ-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-07T09:10:02Z" | |
mradermacher/SmolLM-360M-i1-GGUF | mradermacher | "2025-03-04T21:15:47Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:HuggingFaceTB/smollm-corpus",
"base_model:HuggingFaceTB/SmolLM-360M",
"base_model:quantized:HuggingFaceTB/SmolLM-360M",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-03-04T21:06:12Z" | ---
base_model: HuggingFaceTB/SmolLM-360M
datasets:
- HuggingFaceTB/smollm-corpus
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/HuggingFaceTB/SmolLM-360M
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SmolLM-360M-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
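As a quick smoke test, a downloaded quant can be run with the llama.cpp CLI (a sketch; the binary name and flags vary by build):

```bash
./llama-cli -m SmolLM-360M.i1-Q4_K_M.gguf -p "Once upon a time" -n 64
```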
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-IQ1_S.gguf) | i1-IQ1_S | 0.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-IQ1_M.gguf) | i1-IQ1_M | 0.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-IQ2_S.gguf) | i1-IQ2_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-IQ2_M.gguf) | i1-IQ2_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-IQ3_S.gguf) | i1-IQ3_S | 0.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-Q2_K.gguf) | i1-Q2_K | 0.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-IQ3_M.gguf) | i1-IQ3_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-Q4_0.gguf) | i1-Q4_0 | 0.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-Q4_1.gguf) | i1-Q4_1 | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-360M-i1-GGUF/resolve/main/SmolLM-360M.i1-Q6_K.gguf) | i1-Q6_K | 0.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
btluu/SAC-PandaReachDense-v3 | btluu | "2024-05-31T20:03:36Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-05-31T19:59:16Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: SAC
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.19 +/- 0.08
name: mean_reward
verified: false
---
# **SAC** Agent playing **PandaReachDense-v3**
This is a trained model of a **SAC** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch (the checkpoint filename below is a guess; check the repo's file list):

```python
from stable_baselines3 import SAC
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained agent
checkpoint = load_from_hub(
    repo_id="btluu/SAC-PandaReachDense-v3",
    filename="sac-PandaReachDense-v3.zip",  # hypothetical filename
)
model = SAC.load(checkpoint)
```
|
sreejith8100/donut-base-sroie | sreejith8100 | "2023-10-21T05:58:35Z" | 2 | 0 | transformers | [
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2023-10-16T07:02:49Z" | ---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8229
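An inference sketch (the task prompt token below is an assumption; check the tokenizer's special tokens for the actual value):

```python
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("sreejith8100/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("sreejith8100/donut-base-sroie")

image = Image.open("receipt.png").convert("RGB")   # placeholder path
pixel_values = processor(image, return_tensors="pt").pixel_values
task_prompt = "<s_sroie>"                          # hypothetical task token
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs)[0])
```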
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.1301 | 1.0 | 73 | 4.2545 |
| 1.8221 | 2.0 | 146 | 2.3995 |
| 0.9406 | 3.0 | 219 | 1.9901 |
| 1.3708 | 4.0 | 292 | 1.7363 |
| 0.9771 | 5.0 | 365 | 1.6777 |
| 0.5417 | 6.0 | 438 | 1.6835 |
| 0.9799 | 7.0 | 511 | 1.6810 |
| 0.8556 | 8.0 | 584 | 1.6444 |
| 0.4318 | 9.0 | 657 | 1.6896 |
| 0.3058 | 10.0 | 730 | 1.7384 |
| 0.6697 | 11.0 | 803 | 1.7513 |
| 0.3883 | 12.0 | 876 | 1.7887 |
| 0.166 | 13.0 | 949 | 1.8229 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
|
Sagicc/whisper-small-sr-jv | Sagicc | "2023-12-01T09:38:58Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"sr",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-11-28T21:00:08Z" | ---
language:
- sr
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Small Sr JV
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Sr JV
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Juzne Vesti dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8988
- Wer Ortho: 0.4591
- Wer: 0.3415
## Model description
The Juzne Vesti dataset is published by:
Rupnik, Peter and Ljubešić, Nikola, 2022,\
ASR training dataset for Serbian JuzneVesti-SR v1.0, Slovenian language resource repository CLARIN.SI, ISSN 2820-4042,\
http://hdl.handle.net/11356/1679.
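A transcription sketch with the `transformers` pipeline (`sample.wav` is a placeholder; the kwargs are the usual Whisper generation options):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Sagicc/whisper-small-sr-jv")
result = asr("sample.wav", generate_kwargs={"language": "serbian", "task": "transcribe"})
print(result["text"])
```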
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.5956 | 0.74 | 500 | 0.8063 | 0.4794 | 0.3674 |
| 0.4401 | 1.48 | 1000 | 0.7975 | 0.4574 | 0.3423 |
| 0.3029 | 2.22 | 1500 | 0.7821 | 0.4512 | 0.3392 |
| 0.3016 | 2.96 | 2000 | 0.7828 | 0.4497 | 0.3318 |
| 0.2372 | 3.7 | 2500 | 0.8254 | 0.4503 | 0.3335 |
| 0.1762 | 4.44 | 3000 | 0.8402 | 0.4505 | 0.3381 |
| 0.1414 | 5.18 | 3500 | 0.8945 | 0.4584 | 0.3418 |
| 0.1326 | 5.92 | 4000 | 0.8988 | 0.4591 | 0.3415 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.1
|
catyy397/first8 | catyy397 | "2025-04-20T05:15:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-20T05:15:02Z" | |
nathanialhunt/cd535d2a-7335-4bdd-ab7f-b247933c2ef8 | nathanialhunt | "2025-01-31T03:00:18Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:openlm-research/open_llama_3b",
"base_model:adapter:openlm-research/open_llama_3b",
"license:apache-2.0",
"region:us"
] | null | "2025-01-31T02:20:35Z" | ---
library_name: peft
license: apache-2.0
base_model: openlm-research/open_llama_3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cd535d2a-7335-4bdd-ab7f-b247933c2ef8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: openlm-research/open_llama_3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7d659d1d3be06d15_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7d659d1d3be06d15_train_data.json
type:
field_input: ''
field_instruction: text
field_output: text_cleaned
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/cd535d2a-7335-4bdd-ab7f-b247933c2ef8
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/7d659d1d3be06d15_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f726e984-621b-4576-a8f1-206d74638cdd
wandb_project: Birthday-SN56-24-Gradients-On-Demand
wandb_run: your_name
wandb_runid: f726e984-621b-4576-a8f1-206d74638cdd
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cd535d2a-7335-4bdd-ab7f-b247933c2ef8
This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 0.8967 |
| 0.833 | 0.0003 | 13 | 0.6719 |
| 0.6248 | 0.0005 | 26 | 0.5858 |
| 0.6159 | 0.0008 | 39 | 0.5584 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lewtun/dummy-model | lewtun | "2024-02-21T09:58:39Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:HuggingFaceH4/qwen-1.5-0.5b-ift",
"base_model:finetune:HuggingFaceH4/qwen-1.5-0.5b-ift",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-13T16:05:25Z" | ---
base_model: HuggingFaceH4/qwen-1.5-0.5b-ift
tags:
- generated_from_trainer
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [HuggingFaceH4/qwen-1.5-0.5b-ift](https://huggingface.co/HuggingFaceH4/qwen-1.5-0.5b-ift) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
rwheel/q-FrozenLake-v1-8x8-noSlippery | rwheel | "2022-12-14T10:01:26Z" | 0 | 0 | null | [
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2022-12-14T10:01:22Z" | ---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is defined in the Hugging Face Deep RL course notebook;
# `gym` must be imported separately (import gym).
model = load_from_hub(repo_id="rwheel/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
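Once loaded, the pickled dictionary can be used directly for a greedy rollout. A short sketch, assuming the dictionary exposes a `qtable` key (as in the Deep RL course utilities) and a gymnasium-style step API:

```python
import numpy as np

# Greedy evaluation sketch: always pick the highest-value action from the Q-table.
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```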
|
annabellehuether/topic-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_01wd | annabellehuether | "2023-12-03T23:24:12Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-12-03T22:46:30Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: topic-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_01wd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# topic-bert-base-uncased-supreme-court-32batch_3epoch_2e5lr_01wd
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8248
- Accuracy: 0.7443
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 7
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2978 | 1.0 | 660 | 0.9208 | 0.7136 |
| 0.8199 | 2.0 | 1320 | 0.8443 | 0.7377 |
| 0.6824 | 3.0 | 1980 | 0.8248 | 0.7443 |
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
mradermacher/Qwen2.5-7B-RRP-1M-GGUF | mradermacher | "2025-01-27T10:30:29Z" | 404 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Qwen2.5-7B-RRP-1M",
"base_model:quantized:bunnycore/Qwen2.5-7B-RRP-1M",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-27T03:38:44Z" | ---
base_model: bunnycore/Qwen2.5-7B-RRP-1M
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Qwen2.5-7B-RRP-1M
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-7B-RRP-1M-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
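For a quick local test, one option is the `llama-cpp-python` bindings (an assumption; any llama.cpp-compatible runtime works). The file name below refers to one of the quants from the table that follows:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Sketch: load a downloaded quant and run a short completion.
llm = Llama(model_path="Qwen2.5-7B-RRP-1M.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a haiku about quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```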
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-RRP-1M-GGUF/resolve/main/Qwen2.5-7B-RRP-1M.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-RRP-1M-GGUF/resolve/main/Qwen2.5-7B-RRP-1M.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-RRP-1M-GGUF/resolve/main/Qwen2.5-7B-RRP-1M.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-RRP-1M-GGUF/resolve/main/Qwen2.5-7B-RRP-1M.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-RRP-1M-GGUF/resolve/main/Qwen2.5-7B-RRP-1M.IQ4_XS.gguf) | IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-RRP-1M-GGUF/resolve/main/Qwen2.5-7B-RRP-1M.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-RRP-1M-GGUF/resolve/main/Qwen2.5-7B-RRP-1M.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-RRP-1M-GGUF/resolve/main/Qwen2.5-7B-RRP-1M.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-RRP-1M-GGUF/resolve/main/Qwen2.5-7B-RRP-1M.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-RRP-1M-GGUF/resolve/main/Qwen2.5-7B-RRP-1M.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-RRP-1M-GGUF/resolve/main/Qwen2.5-7B-RRP-1M.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-7B-RRP-1M-GGUF/resolve/main/Qwen2.5-7B-RRP-1M.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
cansen88/PromptGenerator_5_topic | cansen88 | "2022-08-10T21:07:10Z" | 4 | 0 | transformers | [
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-08-10T20:51:09Z" | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: PromptGenerator_5_topic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# PromptGenerator_5_topic
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.6848
- Validation Loss: 10.6672
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -999, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.6864 | 10.6743 | 0 |
| 10.7045 | 10.6736 | 1 |
| 10.7114 | 10.6722 | 2 |
| 10.7082 | 10.6701 | 3 |
| 10.6848 | 10.6672 | 4 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.8.2
- Datasets 2.4.0
- Tokenizers 0.12.1
|
TFOCUS/v1v-testonbet_2 | TFOCUS | "2025-03-22T04:42:51Z" | 0 | 0 | null | [
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-03-22T04:35:19Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
kowsiknd/checkpoint-13500-finetuned2 | kowsiknd | "2023-11-22T11:23:26Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-22T11:12:39Z" | ---
tags:
- generated_from_trainer
model-index:
- name: checkpoint-13500-finetuned2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoint-13500-finetuned2
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1
- Datasets 2.14.6
- Tokenizers 0.14.1
|
John6666/3010nc-xx-mixpony-v25-sdxl | John6666 | "2024-07-23T06:57:52Z" | 100 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-07-23T06:53:14Z" | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- pony
---
Original model is [here](https://civitai.com/models/548205?modelVersionId=665891).
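Since the checkpoint is stored in Diffusers format, it should load with `StableDiffusionXLPipeline`; the prompt below is illustrative:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Illustrative sketch: load this SDXL (Pony-based) checkpoint and generate one image.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/3010nc-xx-mixpony-v25-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("photorealistic portrait, soft lighting").images[0]
image.save("sample.png")
```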
|
LLM-GAT/llama-3-8b-instruct-rmu-checkpoint-4 | LLM-GAT | "2024-08-03T17:32:03Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-03T17:25:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
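In the absence of author-provided instructions, a generic Transformers text-generation sketch (chat-template usage is an assumption based on the `conversational` tag; the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "LLM-GAT/llama-3-8b-instruct-rmu-checkpoint-4"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what RMU unlearning does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```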
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kaustubhjata/PhishGuard | kaustubhjata | "2024-09-29T08:50:48Z" | 5 | 1 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"arxiv:1910.09700",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-09-29T07:24:54Z" | ---
language:
- en
metrics:
- accuracy
base_model:
- google-bert/bert-base-uncased
library_name: transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
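A generic sketch for trying the classifier in the meantime (the label names come from the model config and are not documented in this card; the example text is illustrative):

```python
from transformers import pipeline

# Sketch: score a suspicious message with the fine-tuned BERT classifier.
clf = pipeline("text-classification", model="kaustubhjata/PhishGuard")
print(clf("Your account is locked! Verify at http://example-login.verify-now.biz"))
```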
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yashmaurya01/TinyLlama-1.1B-Chat-v1.0_with_onnx | yashmaurya01 | "2024-09-23T17:37:29Z" | 27 | 0 | null | [
"onnx",
"safetensors",
"llama",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:HuggingFaceH4/ultrachat_200k",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"license:apache-2.0",
"region:us"
] | null | "2024-09-23T17:32:25Z" | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
widget:
- example_title: Fibonacci (Python)
messages:
- role: system
content: You are a chatbot who can help code!
- role: user
content: Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)'s training recipe.** The model was "initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions that are ranked by GPT-4."
#### How to use
You will need transformers>=4.34.
Do check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...
``` |
botcon/Somethings | botcon | "2023-11-21T09:28:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2023-11-15T08:30:20Z" | Main Script: QuestionAnswering.py
The script uses the Hugging Face libraries for managing datasets, importing/exporting models, and training.
There are various variables at the start of the script.
- train: Training a new model
- PEFT: Whether to use PEFT during training
- tf32/fp16: Mixed precision training choice
- trained_model: Name of trained model (to be pushed to HF Hub)
- train_checkpoint: Checkpoint of training (None by default)
- squad_shift: Whether to include extra data (squadshift)
- base_tokenizer: Tokenizer of base model
- base_model: Pre-trained model
- test: Testing a model
- tokenizer_list/model_list/question_list: The tokenizers, models, and questions to be tested.
CUDA is enabled if applicable.
Requires the user to log in to the Hugging Face Hub (via a command-line token or through the script) when training. Alternatively, skip pushing to the Hub and a local repository will be created instead.
Hugging Face repositories created (trained models; see the usage sketch after this list):
- botcon/XLNET_squad_finetuned_large
- botcon/XLNET_squadshift_finetuned_large
- botcon/LUKE_squad_finetuned_large
- botcon/LUKE_squadshift_finetuned_large
- botcon/LUKE_squad_what
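Any of the checkpoints above can be tried with the standard question-answering pipeline; a sketch using one of the LUKE models (the choice of checkpoint and the example inputs are illustrative):

```python
from transformers import pipeline

# Sketch: run extractive QA with one of the fine-tuned checkpoints above.
qa = pipeline("question-answering", model="botcon/LUKE_squad_finetuned_large")
print(qa(question="What is the main script?",
         context="The main script of this project is QuestionAnswering.py."))
```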
|
hazyresearch/mamba-1b-50b | hazyresearch | "2024-04-20T22:14:47Z" | 151 | 0 | transformers | [
"transformers",
"pytorch",
"en",
"dataset:EleutherAI/pile",
"arxiv:2402.18668",
"endpoints_compatible",
"region:us"
] | null | "2024-04-20T07:09:14Z" | ---
datasets:
- EleutherAI/pile
language:
- en
---
# Model Card
This model is pretrained as a reference baseline to the Based model provided here: https://huggingface.co/hazyresearch/based-1b-50b.
Both checkpoints are pretrained on **50Bn tokens** of the Pile in the exact same data order using next token prediction.
A WandB report for training is here: https://api.wandb.ai/links/hazy-research/ggo9rst2
### Model Sources
The model is a standard Mamba model using the model code provided here: https://github.com/state-spaces/mamba/tree/main/mamba_ssm
The training code is provided here and can be used to reproduce training: https://github.com/HazyResearch/based
The paper for the work is here, and the appendix includes additional experimental details/hyperparameters: https://arxiv.org/abs/2402.18668
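Since the checkpoint targets the `mamba_ssm` reference implementation, loading would look roughly like the sketch below; the tokenizer choice (GPT-NeoX, as commonly used for Pile-trained models) is an assumption:

```python
import torch
from transformers import AutoTokenizer
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

# Rough sketch: load the checkpoint with the mamba_ssm reference code.
# The tokenizer choice is an assumption, not stated in this card.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = MambaLMHeadModel.from_pretrained("hazyresearch/mamba-1b-50b",
                                         device="cuda", dtype=torch.float16)

ids = tokenizer("The Pile is", return_tensors="pt").input_ids.to("cuda")
out = model.generate(ids, max_length=32)
print(tokenizer.decode(out[0]))
```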
### Uses
The purpose of this work is to evaluate the language modeling quality of a new efficient architecture, Based.
We include a series of benchmarks that you can use to evaluate quality:
- FDA: https://huggingface.co/datasets/hazyresearch/based-fda
- SWDE: https://huggingface.co/datasets/hazyresearch/based-swde
- SQUAD: https://huggingface.co/datasets/hazyresearch/based-squad
## Citation
Please consider citing this paper if you use our work:
```
@article{arora2024simple,
title={Simple linear attention language models balance the recall-throughput tradeoff},
author={Arora, Simran and Eyuboglu, Sabri and Zhang, Michael and Timalsina, Aman and Alberti, Silas and Zinsley, Dylan and Zou, James and Rudra, Atri and Ré, Christopher},
journal={arXiv:2402.18668},
year={2024}
}
```
Please reach out to [email protected], [email protected], and [email protected] with questions. |
hmehta92/mdeberta-v3-ict-content-ep15 | hmehta92 | "2023-02-17T17:38:53Z" | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"deberta-v2",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-02-17T17:34:53Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# hmehta92/mdeberta-v3-ict-content-ep15
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('hmehta92/mdeberta-v3-ict-content-ep15')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('hmehta92/mdeberta-v3-ict-content-ep15')
model = AutoModel.from_pretrained('hmehta92/mdeberta-v3-ict-content-ep15')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hmehta92/mdeberta-v3-ict-content-ep15)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2220 with parameters:
```
{'batch_size': 1024, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 3330,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 64, 'do_lower_case': False}) with Transformer model: DebertaV2Model
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
hongngo/9dd16ffb-8670-42ce-8c8f-d54cceb7625d | hongngo | "2025-01-30T07:59:26Z" | 5 | 0 | peft | [
"peft",
"safetensors",
"gpt_neox",
"axolotl",
"generated_from_trainer",
"base_model:databricks/dolly-v2-3b",
"base_model:adapter:databricks/dolly-v2-3b",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-30T07:22:32Z" | ---
library_name: peft
license: mit
base_model: databricks/dolly-v2-3b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9dd16ffb-8670-42ce-8c8f-d54cceb7625d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: databricks/dolly-v2-3b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2889b1b55a1bfe58_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2889b1b55a1bfe58_train_data.json
type:
field_instruction: section
field_output: generations
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: hongngo/9dd16ffb-8670-42ce-8c8f-d54cceb7625d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/2889b1b55a1bfe58_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: fa5de4f5-f662-4076-87ce-91e6ead426a4
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: fa5de4f5-f662-4076-87ce-91e6ead426a4
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 9dd16ffb-8670-42ce-8c8f-d54cceb7625d
This model is a fine-tuned version of [databricks/dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.8731 | 0.3509 | 200 | 1.5425 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
wajidlinux99/gibberish-text-detector | wajidlinux99 | "2024-10-07T08:34:52Z" | 36,089 | 5 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"text",
"nlp",
"correction",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-01-16T11:46:09Z" | ---
language:
- en
pipeline_tag: text-classification
tags:
- text
- nlp
- correction
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 492513457
- CO2 Emissions (in grams): 5.527544460835904
## Validation Metrics
- Loss: 0.07609463483095169
- Accuracy: 0.9735624586913417
- Macro F1: 0.9736173135739408
- Micro F1: 0.9735624586913417
- Weighted F1: 0.9736173135739408
- Macro Precision: 0.9737771415197378
- Micro Precision: 0.9735624586913417
- Weighted Precision: 0.9737771415197378
- Macro Recall: 0.9735624586913417
- Micro Recall: 0.9735624586913417
- Weighted Recall: 0.9735624586913417
## Usage
You can use CURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Is this text really worth it?"}' https://api-inference.huggingface.co/models/wajidlinux99/gibberish-text-detector
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("wajidlinux99/gibberish-text-detector", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("wajidlinux99/gibberish-text-detector", use_auth_token=True)
inputs = tokenizer("Is this text really worth it?", return_tensors="pt")
outputs = model(**inputs)
```
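To turn the raw logits into a label, a typical post-processing step looks like this (sketch continuing the variables above; the label set comes from the model config):

```python
import torch

# Sketch: map logits to the predicted label via the model's config.
probs = torch.softmax(outputs.logits, dim=-1)
pred = int(probs.argmax(dim=-1))
print(model.config.id2label[pred], float(probs[0, pred]))
```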
# Original Repository
**madhurjindal/autonlp-Gibberish-Detector-492513457**
wza/stock_multi_modal | wza | "2023-06-18T06:33:06Z" | 12 | 0 | transformers | [
"transformers",
"pytorch",
"llava",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-06-17T14:17:54Z" | # Description
Trained on top of llava-13b-v1: https://huggingface.co/wza/llava-13b-v1 (github: https://github.com/haotian-liu/LLaVA)
# Dataset
Constructed a dataset on stock K-lines, covering both pre-training and instruction tuning.
# Training scripts
pre-train:
```
torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
LLaVA/llava/train/train_mem.py \
--model_name_or_path llava-13b-v1 \
--data_path JsonFormatDataset/PretrainData/data.json \
--image_folder JsonFormatDataset/PretrainData/images \
--vision_tower openai/clip-vit-large-patch14 \
--tune_mm_mlp_adapter True \
--mm_vision_select_layer -2 \
--mm_use_im_start_end \
--bf16 True \
--output_dir ./checkpoints/llava-13b-pretrain \
--num_train_epochs 1 \
--per_device_train_batch_size 8 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 2 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 2400 \
--save_total_limit 1 \
--learning_rate 2e-3 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True \
--report_to wandb
```
instruction:
```
torchrun --nnodes=1 --nproc_per_node=8 --master_port=25001 \
LLaVA/llava/train/train_mem.py \
--model_name_or_path ./checkpoints/llava-13b-pretrain \
--data_path JsonFormatDataset/InstructionTuneData/data.json \
--image_folder JsonFormatDataset/InstructionTuneData/images/ \
--vision_tower openai/clip-vit-large-patch14 \
--mm_vision_select_layer -2 \
--mm_use_im_start_end \
--bf16 True \
--output_dir ./checkpoints/llava-13b-instruction \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 5000 \
--save_total_limit 3 \
--learning_rate 2e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 True \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--model_max_length 2048 \
--gradient_checkpointing True \
--lazy_preprocess True \
--report_to wandb
```
# Training settings
8xA100-80G-sxm4
Pre-train: https://wandb.ai/wzaa/huggingface/runs/cd5ou876/overview?workspace=user-wangziao1993
Fine-tune: https://wandb.ai/wzaa/huggingface/runs/y5bsz8dw/overview?workspace=user-wangziao1993 |
xujinheng666/model_name | xujinheng666 | "2025-02-22T08:44:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-22T08:43:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
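A generic sketch in the meantime (the checkpoint is a DistilBERT sequence classifier; the labels depend on the undocumented training setup, and the input text is illustrative):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "xujinheng666/model_name"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])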
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
harshit777/codellama2-finetuned-codex | harshit777 | "2023-09-06T09:25:01Z" | 0 | 0 | null | [
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:finetune:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | "2023-09-06T08:00:32Z" | ---
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: codellama2-finetuned-codex
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codellama2-finetuned-codex
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the None dataset.
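A minimal generation sketch (the card does not state whether the repo holds merged weights or an adapter; the code below assumes merged causal-LM weights and an illustrative prompt):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "harshit777/codellama2-finetuned-codex"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

prompt = "def fibonacci(n):"
ids = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**ids, max_new_tokens=64)[0], skip_special_tokens=True))
```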
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 100
### Training results
### Framework versions
- Transformers 4.33.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
lesso16/caa9cb1f-600c-4884-b42a-1d8e656bc7a1 | lesso16 | "2025-01-23T18:30:51Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"base_model:Xenova/tiny-random-Phi3ForCausalLM",
"base_model:adapter:Xenova/tiny-random-Phi3ForCausalLM",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-23T18:29:24Z" | ---
library_name: peft
base_model: Xenova/tiny-random-Phi3ForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: caa9cb1f-600c-4884-b42a-1d8e656bc7a1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Xenova/tiny-random-Phi3ForCausalLM
bf16: auto
chat_template: llama3
datasets:
- data_files:
- ac52733544e3c235_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ac52733544e3c235_train_data.json
type:
field_instruction: problem
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso16/caa9cb1f-600c-4884-b42a-1d8e656bc7a1
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/ac52733544e3c235_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d02e1ae5-13e6-4bae-95ee-6a355e82ebd5
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d02e1ae5-13e6-4bae-95ee-6a355e82ebd5
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# caa9cb1f-600c-4884-b42a-1d8e656bc7a1
This model is a fine-tuned version of [Xenova/tiny-random-Phi3ForCausalLM](https://huggingface.co/Xenova/tiny-random-Phi3ForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.1369 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
stablediffusionapi/xsachi-interiordesgi | stablediffusionapi | "2025-01-20T11:26:45Z" | 28 | 1 | diffusers | [
"diffusers",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-08-02T09:50:56Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# XSAchi-InteriorDesginV5ForCN API Inference

## Get API Key
Get API key from [ModelsLab](https://modelslab.com/), No Payment needed.
Replace Key in below code, change **model_id** to "xsachi-interiordesgi"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/xsachi-interiordesgi)
Model link: [View model](https://stablediffusionapi.com/models/xsachi-interiordesgi)
Credits: [View credits](https://civitai.com/?query=XSAchi-InteriorDesginV5ForCN)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json
url = "https://stablediffusionapi.com/api/v4/dreambooth"
payload = json.dumps({
"key": "your_api_key",
"model_id": "xsachi-interiordesgi",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": "no",
"enhance_prompt": "yes",
"seed": None,
"guidance_scale": 7.5,
"multi_lingual": "no",
"panorama": "no",
"self_attention": "no",
"upscale": "no",
"embeddings": "embeddings_model_id",
"lora": "lora_model_id",
"webhook": None,
"track_id": None
})
headers = {
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
swardiantara/ADFLER-roberta-base | swardiantara | "2024-11-14T16:28:56Z" | 125 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"token-classification",
"drone-forensics",
"event-recognition",
"en",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-11-14T11:18:03Z" | ---
pipeline_tag: token-classification
tags:
- drone-forensics
- event-recognition
license: mit
language:
- en
base_model:
- FacebookAI/roberta-base
library_name: transformers
---
# ADFLER-roberta-base
This is a [roberta-base](https://huggingface.co/FacebookAI/roberta-base) model fine-tuned on a collection of drone flight log messages. It performs log event recognition by assigning an NER tag to each token within the input message, using the BIOES tagging scheme.
For more detailed information about the underlying architecture, please refer to the RoBERTa model card.
<!--- Describe your model here -->
## Intended Use

- Use it to split log records into sentences and to detect whether each sentence is an event message.
- This model is trained on diverse drone log messages from various drone models, acquired from [Air Data](https://app.airdata.com/wiki/Notifications/)
## Usage (Transformers)
Using this model becomes easy when you have [transformers](https://github.com/huggingface/transformers) installed:
```
pip install -U transformers
```
Then you can use the model like this:
```python
>>> from transformers import pipeline
>>> model = pipeline('ner', model='swardiantara/ADFLER-roberta-base')
>>> model("Unknown Error, Cannot Takeoff. Contact DJI support.")
[{'entity': 'B-Event',
'score': np.float32(0.9991462),
'index': 1,
'word': 'Unknown',
'start': 0,
'end': 7},
{'entity': 'E-Event',
'score': np.float32(0.9971226),
'index': 2,
'word': 'ĠError',
'start': 8,
'end': 13},
{'entity': 'B-Event',
'score': np.float32(0.9658275),
'index': 4,
'word': 'ĠCannot',
'start': 15,
'end': 21},
{'entity': 'E-Event',
'score': np.float32(0.9913662),
'index': 5,
'word': 'ĠTake',
'start': 22,
'end': 26},
{'entity': 'E-Event',
'score': np.float32(0.9961124),
'index': 6,
'word': 'off',
'start': 26,
'end': 29},
{'entity': 'B-NonEvent',
'score': np.float32(0.9994654),
'index': 8,
'word': 'ĠContact',
'start': 31,
'end': 38},
{'entity': 'I-NonEvent',
'score': np.float32(0.9946643),
'index': 9,
'word': 'ĠDJ',
'start': 39,
'end': 41},
{'entity': 'I-NonEvent',
'score': np.float32(0.8926663),
'index': 10,
'word': 'I',
'start': 41,
'end': 42},
{'entity': 'E-NonEvent',
'score': np.float32(0.9982748),
'index': 11,
'word': 'Ġsupport',
'start': 43,
'end': 50}]
```
## Citing & Authors
```bibtex
@misc{roberta_ner_model,
author={Silalahi, Swardiantara and Ahmad, Tohari and Studiawan, Hudan},
title = {RoBERTa Model for Drone Flight Log Event Recognition},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face Hub}
}
```
<!--- Describe where people can find more information --> |
mradermacher/gemma-3-4b-it-abliterated-GGUF | mradermacher | "2025-04-06T02:55:36Z" | 1,146 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mlabonne/gemma-3-4b-it-abliterated",
"base_model:quantized:mlabonne/gemma-3-4b-it-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-18T14:54:42Z" | |
zarakiquemparte/zarablend-1.1-l2-7b-GGUF | zarakiquemparte | "2023-08-26T10:22:08Z" | 13 | 3 | null | [
"gguf",
"llama2",
"license:other",
"region:us"
] | null | "2023-08-26T02:52:17Z" | ---
license: other
tags:
- llama2
---
Quantized GGUF of [Zarablend 1.1 L2 7b](https://huggingface.co/zarakiquemparte/zarablend-1.1-l2-7b)
|
merve/chatgpt-prompts-bart | merve | "2023-01-25T13:30:51Z" | 5 | 5 | transformers | [
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-01-25T13:27:13Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: chatgpt-prompts-bart
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# chatgpt-prompts-bart
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5312
- Validation Loss: 2.3834
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
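
A minimal usage sketch (the TF classes match the `tf` weights in this repo; the input role and generation settings are illustrative):

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("merve/chatgpt-prompts-bart")
model = TFAutoModelForSeq2SeqLM.from_pretrained("merve/chatgpt-prompts-bart")

# Generate a ChatGPT-style persona prompt from a short role description
inputs = tokenizer("photographer", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=150, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```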
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 6.0817 | 3.2132 | 0 |
| 3.4901 | 2.7244 | 1 |
| 2.9792 | 2.5214 | 2 |
| 2.7311 | 2.4116 | 3 |
| 2.5312 | 2.3834 | 4 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Blazgo/temp-model-for-2-mini-004 | Blazgo | "2025-02-26T00:17:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:CultriX/Qwen2.5-14B-ReasoningMerge",
"base_model:merge:CultriX/Qwen2.5-14B-ReasoningMerge",
"base_model:arcee-ai/Virtuoso-Small-v2",
"base_model:merge:arcee-ai/Virtuoso-Small-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-26T00:11:25Z" | ---
base_model:
- CultriX/Qwen2.5-14B-ReasoningMerge
- arcee-ai/Virtuoso-Small-v2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [arcee-ai/Virtuoso-Small-v2](https://huggingface.co/arcee-ai/Virtuoso-Small-v2) as a base.
### Models Merged
The following models were included in the merge:
* [CultriX/Qwen2.5-14B-ReasoningMerge](https://huggingface.co/CultriX/Qwen2.5-14B-ReasoningMerge)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: arcee-ai/Virtuoso-Small-v2
parameters:
density: 0.5
weight: 0.5
- model: CultriX/Qwen2.5-14B-ReasoningMerge
parameters:
density: 0.5
weight: 0.5
merge_method: ties
base_model: arcee-ai/Virtuoso-Small-v2
parameters:
normalize: false
int8_mask: true
dtype: float16
```
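
With [mergekit](https://github.com/cg123/mergekit) installed, a configuration like this is typically applied via its `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yaml ./merged-model` (paths here are illustrative).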
|
thangvip/vi-t5-reward-model-6-epochs | thangvip | "2024-01-22T05:10:52Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text-classification",
"generated_from_trainer",
"base_model:thangvip/vi-t5-base-finetune-rewriter-5-epochs",
"base_model:finetune:thangvip/vi-t5-base-finetune-rewriter-5-epochs",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-22T05:10:13Z" | ---
license: mit
base_model: thangvip/vi-t5-base-finetune-rewriter-5-epochs
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vi-t5-reward-model-6-epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vi-t5-reward-model-6-epochs
This model is a fine-tuned version of [thangvip/vi-t5-base-finetune-rewriter-5-epochs](https://huggingface.co/thangvip/vi-t5-base-finetune-rewriter-5-epochs) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4914
- Accuracy: 0.8106
## Model description
More information needed
## Intended uses & limitations
More information needed
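
A minimal scoring sketch, assuming the checkpoint exposes a standard sequence-classification head and that a higher logit means a more preferred response (the Vietnamese input is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("thangvip/vi-t5-reward-model-6-epochs")
model = AutoModelForSequenceClassification.from_pretrained("thangvip/vi-t5-reward-model-6-epochs")

# Score one candidate answer; higher logit = preferred (assumed convention)
inputs = tokenizer("Câu trả lời mẫu cần chấm điểm.", return_tensors="pt")
with torch.no_grad():
    print(model(**inputs).logits)
```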
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ussipan/Llama-3.2-SipanGPT-v0.5-GGUF | ussipan | "2024-12-23T03:16:15Z" | 184 | 0 | transformers | [
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"text2text-generation",
"en",
"dataset:ussipan/sipangpt",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text2text-generation | "2024-12-22T23:22:58Z" | ---
base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
datasets:
- ussipan/sipangpt
pipeline_tag: text2text-generation
---
# SipánGPT 0.5 Llama 3.2 1B GGUF
- Modelo pre-entrenado para responder preguntas de la Universidad Señor de Sipán de Lambayeque, Perú.
- Pre-trained model to answer questions from the Señor de Sipán University of Lambayeque, Peru.
## Testing the model
- Entrenado con 304000 conversaciones, el modelo puede generar alucinaciones.
- Trained with 304000 conversations; the model may generate hallucinations.
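
A minimal local-inference sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the quant filename pattern and the sample question are assumptions:

```python
from llama_cpp import Llama

# Downloads a matching quantized file from this repo and loads it
llm = Llama.from_pretrained(
    repo_id="ussipan/Llama-3.2-SipanGPT-v0.5-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; pick any .gguf file in the repo
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "¿Qué carreras ofrece la USS?"}]
)
print(out["choices"][0]["message"]["content"])
```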
# Uploaded model
- **Developed by:** ussipan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
---
## SipánGPT 0.5 Llama 3.2 1B GGUF
<div style="display: flex; align-items: center; height: fit-content;">
<img src="https://avatars.githubusercontent.com/u/60937214?v=4" width="40" style="margin-right: 10px;"/>
<span>Made with ❤️ by Jhan Gómez P.</span>
</div> |
RayneAmes/suika_v1 | RayneAmes | "2025-02-10T18:39:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-10T18:36:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
KingKazma/cnn_dailymail_t5-small_p_tuning_500_10_3000_8_e4_s55555_v4_l4_v50 | KingKazma | "2023-08-13T14:12:30Z" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-13T14:12:27Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
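
A minimal loading sketch, assuming (from the repo name) that this is a p-tuning adapter for `t5-small`:

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the base model, then attach the p-tuning adapter from this repo
base = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model = PeftModel.from_pretrained(
    base, "KingKazma/cnn_dailymail_t5-small_p_tuning_500_10_3000_8_e4_s55555_v4_l4_v50"
)
tokenizer = AutoTokenizer.from_pretrained("t5-small")
```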
|
nguyenvuvn/llama201204 | nguyenvuvn | "2025-04-13T08:58:37Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | "2025-04-13T08:55:13Z" | ---
license: apache-2.0
---
|
fydhfzh/hubert-classifier-aug-ref | fydhfzh | "2024-07-17T17:21:44Z" | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/hubert-base-ls960",
"base_model:finetune:facebook/hubert-base-ls960",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-07-12T05:56:50Z" | ---
license: apache-2.0
base_model: facebook/hubert-base-ls960
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: hubert-classifier-aug-ref
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-classifier-aug-ref
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1461
- Accuracy: 0.1671
- Precision: 0.0661
- Recall: 0.1671
- F1: 0.0830
- Binary: 0.4137
## Model description
More information needed
## Intended uses & limitations
More information needed
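
A minimal inference sketch (the audio file is illustrative; the label set comes from the fine-tuning data):

```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="fydhfzh/hubert-classifier-aug-ref")
print(classifier("example.wav"))
```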
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Binary |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| No log | 0.19 | 50 | 4.4123 | 0.0377 | 0.0239 | 0.0377 | 0.0213 | 0.2075 |
| No log | 0.38 | 100 | 4.3574 | 0.0674 | 0.0177 | 0.0674 | 0.0253 | 0.2741 |
| No log | 0.58 | 150 | 4.2332 | 0.0323 | 0.0017 | 0.0323 | 0.0032 | 0.2884 |
| No log | 0.77 | 200 | 4.1388 | 0.0647 | 0.0160 | 0.0647 | 0.0182 | 0.3380 |
| No log | 0.96 | 250 | 4.0567 | 0.0674 | 0.0350 | 0.0674 | 0.0222 | 0.3407 |
| No log | 1.15 | 300 | 4.0043 | 0.0566 | 0.0114 | 0.0566 | 0.0143 | 0.3221 |
| No log | 1.34 | 350 | 3.9470 | 0.0485 | 0.0049 | 0.0485 | 0.0080 | 0.3221 |
| No log | 1.53 | 400 | 3.8803 | 0.0593 | 0.0124 | 0.0593 | 0.0135 | 0.3353 |
| No log | 1.73 | 450 | 3.8326 | 0.0566 | 0.0057 | 0.0566 | 0.0097 | 0.3323 |
| 4.1711 | 1.92 | 500 | 3.7760 | 0.0566 | 0.0061 | 0.0566 | 0.0103 | 0.3356 |
| 4.1711 | 2.11 | 550 | 3.7454 | 0.0647 | 0.0066 | 0.0647 | 0.0118 | 0.3372 |
| 4.1711 | 2.3 | 600 | 3.7036 | 0.0701 | 0.0075 | 0.0701 | 0.0132 | 0.3429 |
| 4.1711 | 2.49 | 650 | 3.6729 | 0.0728 | 0.0094 | 0.0728 | 0.0161 | 0.3431 |
| 4.1711 | 2.68 | 700 | 3.6306 | 0.0728 | 0.0117 | 0.0728 | 0.0177 | 0.3461 |
| 4.1711 | 2.88 | 750 | 3.6075 | 0.0836 | 0.0155 | 0.0836 | 0.0237 | 0.3536 |
| 4.1711 | 3.07 | 800 | 3.5817 | 0.0943 | 0.0284 | 0.0943 | 0.0285 | 0.3604 |
| 4.1711 | 3.26 | 850 | 3.5607 | 0.0916 | 0.0179 | 0.0916 | 0.0272 | 0.3577 |
| 4.1711 | 3.45 | 900 | 3.5373 | 0.0943 | 0.0214 | 0.0943 | 0.0304 | 0.3588 |
| 4.1711 | 3.64 | 950 | 3.5083 | 0.1078 | 0.0357 | 0.1078 | 0.0464 | 0.3714 |
| 3.7424 | 3.84 | 1000 | 3.4717 | 0.1105 | 0.0512 | 0.1105 | 0.0520 | 0.3765 |
| 3.7424 | 4.03 | 1050 | 3.4619 | 0.1213 | 0.0361 | 0.1213 | 0.0489 | 0.3825 |
| 3.7424 | 4.22 | 1100 | 3.4375 | 0.1240 | 0.0453 | 0.1240 | 0.0554 | 0.3844 |
| 3.7424 | 4.41 | 1150 | 3.4282 | 0.1267 | 0.0390 | 0.1267 | 0.0547 | 0.3849 |
| 3.7424 | 4.6 | 1200 | 3.4076 | 0.1267 | 0.0334 | 0.1267 | 0.0493 | 0.3838 |
| 3.7424 | 4.79 | 1250 | 3.3875 | 0.1078 | 0.0263 | 0.1078 | 0.0388 | 0.3730 |
| 3.7424 | 4.99 | 1300 | 3.3746 | 0.1240 | 0.0547 | 0.1240 | 0.0496 | 0.3822 |
| 3.7424 | 5.18 | 1350 | 3.3459 | 0.1375 | 0.0621 | 0.1375 | 0.0618 | 0.3946 |
| 3.7424 | 5.37 | 1400 | 3.3313 | 0.1375 | 0.0598 | 0.1375 | 0.0650 | 0.3946 |
| 3.7424 | 5.56 | 1450 | 3.3263 | 0.1429 | 0.0556 | 0.1429 | 0.0623 | 0.3951 |
| 3.5358 | 5.75 | 1500 | 3.3100 | 0.1348 | 0.0629 | 0.1348 | 0.0640 | 0.3895 |
| 3.5358 | 5.94 | 1550 | 3.2880 | 0.1402 | 0.0637 | 0.1402 | 0.0641 | 0.3957 |
| 3.5358 | 6.14 | 1600 | 3.2742 | 0.1402 | 0.0628 | 0.1402 | 0.0640 | 0.3965 |
| 3.5358 | 6.33 | 1650 | 3.2605 | 0.1509 | 0.0861 | 0.1509 | 0.0786 | 0.4049 |
| 3.5358 | 6.52 | 1700 | 3.2480 | 0.1429 | 0.0626 | 0.1429 | 0.0663 | 0.3976 |
| 3.5358 | 6.71 | 1750 | 3.2435 | 0.1482 | 0.0575 | 0.1482 | 0.0665 | 0.4030 |
| 3.5358 | 6.9 | 1800 | 3.2324 | 0.1482 | 0.0619 | 0.1482 | 0.0670 | 0.4022 |
| 3.5358 | 7.09 | 1850 | 3.2193 | 0.1563 | 0.0806 | 0.1563 | 0.0799 | 0.4070 |
| 3.5358 | 7.29 | 1900 | 3.2122 | 0.1644 | 0.0825 | 0.1644 | 0.0865 | 0.4119 |
| 3.5358 | 7.48 | 1950 | 3.1995 | 0.1617 | 0.0776 | 0.1617 | 0.0836 | 0.4108 |
| 3.4065 | 7.67 | 2000 | 3.1945 | 0.1617 | 0.0771 | 0.1617 | 0.0837 | 0.4116 |
| 3.4065 | 7.86 | 2050 | 3.1851 | 0.1725 | 0.0832 | 0.1725 | 0.0919 | 0.4191 |
| 3.4065 | 8.05 | 2100 | 3.1805 | 0.1617 | 0.0592 | 0.1617 | 0.0776 | 0.4100 |
| 3.4065 | 8.25 | 2150 | 3.1729 | 0.1617 | 0.0573 | 0.1617 | 0.0762 | 0.4100 |
| 3.4065 | 8.44 | 2200 | 3.1696 | 0.1617 | 0.0571 | 0.1617 | 0.0750 | 0.4100 |
| 3.4065 | 8.63 | 2250 | 3.1638 | 0.1644 | 0.0651 | 0.1644 | 0.0781 | 0.4119 |
| 3.4065 | 8.82 | 2300 | 3.1597 | 0.1590 | 0.0540 | 0.1590 | 0.0735 | 0.4089 |
| 3.4065 | 9.01 | 2350 | 3.1548 | 0.1671 | 0.0688 | 0.1671 | 0.0860 | 0.4137 |
| 3.4065 | 9.2 | 2400 | 3.1540 | 0.1617 | 0.0623 | 0.1617 | 0.0798 | 0.4100 |
| 3.4065 | 9.4 | 2450 | 3.1489 | 0.1644 | 0.0661 | 0.1644 | 0.0820 | 0.4119 |
| 3.3382 | 9.59 | 2500 | 3.1493 | 0.1644 | 0.0706 | 0.1644 | 0.0820 | 0.4119 |
| 3.3382 | 9.78 | 2550 | 3.1464 | 0.1671 | 0.0661 | 0.1671 | 0.0831 | 0.4137 |
| 3.3382 | 9.97 | 2600 | 3.1461 | 0.1671 | 0.0661 | 0.1671 | 0.0830 | 0.4137 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
edumunozsala/Qwen2-0.5B-mntp-simcse | edumunozsala | "2024-12-01T12:15:00Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | "2024-12-01T11:51:38Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dracero/a2c-PandaReachDense-v3 | dracero | "2023-09-29T16:51:27Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-09-29T16:45:58Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.25 +/- 0.13
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on this repo's naming):

```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo and load it (filename is assumed)
checkpoint = load_from_hub("dracero/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
tejasreereddy/mistral-test | tejasreereddy | "2024-02-27T12:40:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-27T10:19:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kostiantynk-out/81826e1b-5967-41e5-9b62-b14c1e513f09 | kostiantynk-out | "2025-01-21T17:05:52Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/Yarn-Llama-2-13b-128k",
"base_model:adapter:NousResearch/Yarn-Llama-2-13b-128k",
"region:us"
] | null | "2025-01-21T16:59:12Z" | ---
library_name: peft
base_model: NousResearch/Yarn-Llama-2-13b-128k
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 81826e1b-5967-41e5-9b62-b14c1e513f09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Yarn-Llama-2-13b-128k
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4bf83d22f24808a4_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4bf83d22f24808a4_train_data.json
type:
field_input: wrong_code
field_instruction: problem_description
field_output: acc_code
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk-out/81826e1b-5967-41e5-9b62-b14c1e513f09
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/4bf83d22f24808a4_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8b603b7a-1a29-4539-bdd7-98a7693fd4f6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8b603b7a-1a29-4539-bdd7-98a7693fd4f6
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 81826e1b-5967-41e5-9b62-b14c1e513f09
This model is a fine-tuned version of [NousResearch/Yarn-Llama-2-13b-128k](https://huggingface.co/NousResearch/Yarn-Llama-2-13b-128k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4376
## Model description
More information needed
## Intended uses & limitations
More information needed
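
A minimal sketch for loading the LoRA adapter onto its base model (standard PEFT loading; `trust_remote_code=True` mirrors the axolotl config above):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Llama-2-13b-128k", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "kostiantynk-out/81826e1b-5967-41e5-9b62-b14c1e513f09")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Llama-2-13b-128k")
```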
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2406 | 0.0003 | 1 | 0.5361 |
| 2.9303 | 0.0010 | 3 | 0.5351 |
| 2.6369 | 0.0019 | 6 | 0.5189 |
| 1.6812 | 0.0029 | 9 | 0.4376 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
phongtintruong/meomeo-mhubert-vietbud-193-34 | phongtintruong | "2025-01-13T06:56:37Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"meomeo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-01-13T06:51:34Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
agni17/Llama_2_Fin_Adv | agni17 | "2024-06-11T07:49:43Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | "2024-06-10T17:17:26Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yuhan123/qwen-1.5-4b-semantics_var_3 | Yuhan123 | "2025-03-14T01:24:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-14T01:20:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MatricariaV/lm | MatricariaV | "2025-03-27T20:38:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-03-27T20:37:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
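In the absence of official instructions, here is a minimal sketch based on this card's tags (`marian`, `text2text-generation`); the example sentence is illustrative only, since the expected language pair is undocumented.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repository id taken from this card; the tags indicate a Marian text2text model.
tokenizer = AutoTokenizer.from_pretrained("MatricariaV/lm")
model = AutoModelForSeq2SeqLM.from_pretrained("MatricariaV/lm")

# Illustrative input only -- the expected source language is not documented.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```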
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
limcheekin/snowflake-arctic-embed-l-v2.0-GGUF | limcheekin | "2025-01-01T09:28:20Z" | 805 | 1 | null | [
"gguf",
"embeddings",
"f16",
"base_model:Snowflake/snowflake-arctic-embed-l-v2.0",
"base_model:quantized:Snowflake/snowflake-arctic-embed-l-v2.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | "2025-01-01T09:01:53Z" | ---
license: apache-2.0
base_model:
- Snowflake/snowflake-arctic-embed-l-v2.0
tags:
- gguf
- embeddings
- f16
---
# Model Card: Snowflake Arctic Embed L v2.0 (GGUF Quantized)
## Model Overview
This model is a GGUF-quantized version of [Snowflake's Arctic Embed L v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0), a state-of-the-art multilingual text embedding model designed for high-quality retrieval tasks. The quantization reduces the model's size and computational requirements, facilitating efficient deployment without significantly compromising performance.
## Model Details
- **Model Name:** snowflake-arctic-embed-l-v2.0-GGUF
- **Original Model:** [Snowflake's Arctic Embed L v2.0](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0)
- **Quantization Format:** GGUF
- **Parameters:** 568 million
- **Embedding Dimension:** 1,024
- **Languages Supported:** Multilingual
- **Context Length:** Supports up to 8,192 tokens
- **License:** Apache 2.0
## Quantization Details
GGUF is a binary format introduced by the llama.cpp team and optimized for efficient loading and inference of large language models. Quantization reduces the precision of the model's weights, decreasing memory usage and speeding up computation with minimal impact on accuracy.
## Performance
The original Arctic Embed L v2.0 model achieves state-of-the-art performance on various retrieval benchmarks, including the MTEB Retrieval benchmark, with an NDCG@10 score of 55.98. The GGUF-quantized version aims to maintain this high performance while offering enhanced efficiency.
## Usage
This quantized model is suitable for deployment in resource-constrained environments where memory and computational efficiency are critical. It can be utilized for tasks such as information retrieval, semantic search, and other applications requiring high-quality text embeddings.
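As a sketch of such a deployment, the file can be loaded in embedding mode with `llama-cpp-python`; the GGUF filename below is an assumption, so check the repository for the actual name.
```python
from llama_cpp import Llama

# embedding=True switches llama.cpp into embedding mode.
# The filename is an assumption -- check this repo for the actual GGUF name.
llm = Llama(model_path="./snowflake-arctic-embed-l-v2.0-f16.gguf", embedding=True)

query_vec = llm.embed("what is snowflake arctic embed?")
doc_vec = llm.embed("a multilingual retrieval model")
print(len(query_vec))  # expected: 1024, the embedding dimension listed above
```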
## Limitations
While quantization reduces resource requirements, it may introduce slight degradation in model performance. Users should evaluate the model in their specific use cases to ensure it meets the desired performance criteria.
## Acknowledgements
This quantized model is based on Snowflake's Arctic Embed L v2.0. For more details on the original model, please refer to the [official model card](https://huggingface.co/Snowflake/snowflake-arctic-embed-l-v2.0).
---
For a visual overview of Snowflake's Arctic Embed v2.0, you may find the following video informative:
https://www.youtube.com/watch?v=CmSZZkzghhU |
mradermacher/SpaghettiOs_7B-i1-GGUF | mradermacher | "2024-11-05T10:44:12Z" | 217 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jeiku/SpaghettiOs_7B",
"base_model:quantized:jeiku/SpaghettiOs_7B",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-11-05T10:19:09Z" | ---
base_model: jeiku/SpaghettiOs_7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jeiku/SpaghettiOs_7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SpaghettiOs_7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
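For example, a single quant from the table below can be fetched with the `huggingface-hub` CLI (filenames follow the pattern shown in the table):
```shell
pip install huggingface-hub
huggingface-cli download mradermacher/SpaghettiOs_7B-i1-GGUF \
  SpaghettiOs_7B.i1-Q4_K_M.gguf --local-dir .
```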
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/SpaghettiOs_7B-i1-GGUF/resolve/main/SpaghettiOs_7B.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
f4b1an/clipart | f4b1an | "2024-06-09T11:37:46Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-06-09T11:36:58Z" | ---
license: creativeml-openrail-m
---
|
ryul99/use_data_finetuning | ryul99 | "2023-10-30T05:43:34Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"base_model:facebook/detr-resnet-50",
"base_model:finetune:facebook/detr-resnet-50",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2023-10-30T03:04:40Z" | ---
license: apache-2.0
base_model: facebook/detr-resnet-50
tags:
- generated_from_trainer
model-index:
- name: use_data_finetuning
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# use_data_finetuning
This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset.
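Since the base model is DETR, a minimal inference sketch (assuming the checkpoint keeps DETR's standard image processor) could look like this:
```python
from transformers import pipeline

# Assumes the checkpoint retains DETR's standard image processor.
detector = pipeline("object-detection", model="ryul99/use_data_finetuning")
results = detector("example.jpg")  # path or URL to an image
for r in results:
    print(r["label"], round(r["score"], 3), r["box"])
```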
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
Xu-Ouyang/pythia-410m-deduped-int2-step95000-GPTQ-wikitext2-uva | Xu-Ouyang | "2024-09-13T10:06:26Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
] | text-generation | "2024-09-13T10:06:09Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
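In the absence of official instructions, a minimal loading sketch based on the card's tags (2-bit GPTQ, GPT-NeoX) might look like the following; it assumes `accelerate` plus a GPTQ backend such as `auto-gptq` are installed.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xu-Ouyang/pythia-410m-deduped-int2-step95000-GPTQ-wikitext2-uva"
# Transformers dispatches to the GPTQ kernels automatically when the
# quantization config is present in the checkpoint.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```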
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
appvoid/palmer-math-v-curve | appvoid | "2024-04-24T03:20:53Z" | 132 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:appvoid/palmer-003",
"base_model:merge:appvoid/palmer-003",
"base_model:microsoft/rho-math-1b-v0.1",
"base_model:merge:microsoft/rho-math-1b-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-24T03:19:42Z" | ---
base_model:
- microsoft/rho-math-1b-v0.1
- appvoid/palmer-003
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [microsoft/rho-math-1b-v0.1](https://huggingface.co/microsoft/rho-math-1b-v0.1)
* [appvoid/palmer-003](https://huggingface.co/appvoid/palmer-003)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: appvoid/palmer-003
- model: microsoft/rho-math-1b-v0.1
merge_method: slerp
base_model: appvoid/palmer-003
dtype: float16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: palmer-003 for input & output, rho-math in the middle layers
```
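For reference, a configuration like this is typically executed with the mergekit CLI; the output directory below is illustrative.
```shell
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda  # --cuda is optional
```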
|
mingyue0101/super-cool-instruct | mingyue0101 | "2024-04-18T16:03:08Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"region:us"
] | null | "2024-04-18T16:02:03Z" | ---
library_name: peft
base_model: codellama/CodeLlama-7b-Instruct-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0 |
Okadak/gemma-3-ft-demo | Okadak | "2025-03-22T14:23:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-22T14:13:50Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
htlou/mm-interp-AA_preference_random_0_70 | htlou | "2025-01-01T10:23:11Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llava_next",
"image-text-to-text",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:llava-hf/llava-v1.6-mistral-7b-hf",
"base_model:finetune:llava-hf/llava-v1.6-mistral-7b-hf",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-01-01T09:35:47Z" | ---
library_name: transformers
license: other
base_model: llava-hf/llava-v1.6-mistral-7b-hf
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: AA_preference_random_0_70
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AA_preference_random_0_70
This model is a fine-tuned version of [llava-hf/llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) on the AA_preference_random_0_70 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5414
- Rewards/chosen: 0.5872
- Rewards/rejected: -1.5508
- Rewards/accuracies: 0.7768
- Rewards/margins: 2.1380
- Logps/rejected: -227.1739
- Logps/chosen: -233.6734
- Logits/rejected: -1.9224
- Logits/chosen: -1.9691
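This is a full fine-tune of LLaVA-NeXT, so the usual `llava-hf` loading pattern should apply; the prompt format below is assumed to be unchanged from the base model.
```python
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "htlou/mm-interp-AA_preference_random_0_70"
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Mistral-style prompt used by llava-v1.6-mistral-7b-hf (assumed unchanged here).
prompt = "[INST] <image>\nWhat is shown in this image? [/INST]"
image = Image.open("example.jpg")
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(out[0], skip_special_tokens=True))
```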
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6034 | 0.5348 | 50 | 0.5869 | 0.7400 | -0.1413 | 0.7292 | 0.8813 | -213.0786 | -232.1449 | -2.5305 | -2.5372 |
| 0.2659 | 1.0695 | 100 | 0.5500 | 1.0886 | -0.4196 | 0.7679 | 1.5081 | -215.8615 | -228.6596 | -2.1146 | -2.1482 |
| 0.2599 | 1.6043 | 150 | 0.5465 | 0.7596 | -1.0841 | 0.7679 | 1.8437 | -222.5069 | -231.9491 | -2.1122 | -2.1442 |
| 0.1366 | 2.1390 | 200 | 0.5222 | 0.5890 | -1.4225 | 0.7857 | 2.0115 | -225.8904 | -233.6554 | -2.0824 | -2.1154 |
| 0.1488 | 2.6738 | 250 | 0.5411 | 0.5980 | -1.5260 | 0.7768 | 2.1240 | -226.9253 | -233.5653 | -1.9322 | -1.9781 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.20.3
|
Kardanil/leagaleasy-01-ai-Yi-6B-Chat | Kardanil | "2024-05-07T15:04:00Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:01-ai/Yi-6B-Chat",
"base_model:adapter:01-ai/Yi-6B-Chat",
"license:other",
"region:us"
] | null | "2024-05-07T15:02:39Z" | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: 01-ai/Yi-6B-Chat
datasets:
- generator
model-index:
- name: leagaleasy-01-ai-Yi-6B-Chat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# leagaleasy-01-ai-Yi-6B-Chat
This model is a fine-tuned version of [01-ai/Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) on the generator dataset.
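Because this repository stores a PEFT (LoRA) adapter rather than full weights, a minimal inference sketch might attach it to the base model like so:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# AutoPeftModelForCausalLM reads the base model (01-ai/Yi-6B-Chat) from the
# adapter config and attaches the LoRA weights on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Kardanil/leagaleasy-01-ai-Yi-6B-Chat", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-6B-Chat")

inputs = tokenizer("Summarise this clause:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```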
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
MaziyarPanahi/Experiment26Inex12-7B-GGUF | MaziyarPanahi | "2024-03-30T20:37:51Z" | 44 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"safetensors",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"base_model:rwitz/experiment26-truthy-iter-1",
"base_model:MSL7/INEX12-7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us",
"base_model:automerger/Experiment26Inex12-7B",
"base_model:quantized:automerger/Experiment26Inex12-7B"
] | text-generation | "2024-03-30T20:13:56Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- safetensors
- mistral
- text-generation
- merge
- mergekit
- lazymergekit
- automerger
- base_model:rwitz/experiment26-truthy-iter-1
- base_model:MSL7/INEX12-7b
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- region:us
- text-generation
model_name: Experiment26Inex12-7B-GGUF
base_model: automerger/Experiment26Inex12-7B
inference: false
model_creator: automerger
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/Experiment26Inex12-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment26Inex12-7B-GGUF)
- Model creator: [automerger](https://huggingface.co/automerger)
- Original model: [automerger/Experiment26Inex12-7B](https://huggingface.co/automerger/Experiment26Inex12-7B)
## Description
[MaziyarPanahi/Experiment26Inex12-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment26Inex12-7B-GGUF) contains GGUF format model files for [automerger/Experiment26Inex12-7B](https://huggingface.co/automerger/Experiment26Inex12-7B).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw (see the worked check after this list).
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
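As a quick sanity check on the Q4_K figure, the 4.5 bpw follows directly from the block layout; the byte counts below assume llama.cpp's usual `block_q4_K` struct (two fp16 scale factors plus 12 bytes of packed 6-bit sub-block scales/mins).
```python
# Q4_K super-block: 8 blocks x 32 weights = 256 weights
weights = 256
# d (fp16) + dmin (fp16) + packed scales/mins + 4-bit quants
superblock_bytes = 2 + 2 + 12 + weights // 2
print(superblock_bytes * 8 / weights)  # -> 4.5 bits per weight
```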
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/Experiment26Inex12-7B-GGUF](https://huggingface.co/MaziyarPanahi/Experiment26Inex12-7B-GGUF) and below it, a specific filename to download, such as: Experiment26Inex12-7B.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/Experiment26Inex12-7B-GGUF Experiment26Inex12-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/Experiment26Inex12-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Experiment26Inex12-7B-GGUF Experiment26Inex12-7B.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m Experiment26Inex12-7B.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://github.com/abetlen/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./Experiment26Inex12-7B.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./Experiment26Inex12-7B.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers) |
Abrar21/Eyellama_optc | Abrar21 | "2025-03-24T15:06:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"pytorch",
"en",
"dataset:QIAIUNCC/EYE-QA-PLUS",
"base_model:HuggingFaceH4/tiny-random-LlamaForCausalLM",
"base_model:finetune:HuggingFaceH4/tiny-random-LlamaForCausalLM",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-24T13:19:49Z" | |
0x1202/e22afb05-6f21-480f-9a58-333109c9bf3c | 0x1202 | "2025-01-15T14:53:16Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:princeton-nlp/gemma-2-9b-it-SimPO",
"base_model:adapter:princeton-nlp/gemma-2-9b-it-SimPO",
"license:mit",
"region:us"
] | null | "2025-01-15T14:13:17Z" | ---
library_name: peft
license: mit
base_model: princeton-nlp/gemma-2-9b-it-SimPO
tags:
- axolotl
- generated_from_trainer
model-index:
- name: e22afb05-6f21-480f-9a58-333109c9bf3c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: princeton-nlp/gemma-2-9b-it-SimPO
bf16: true
chat_template: llama3
data_processes: 16
dataset_prepared_path: null
datasets:
- data_files:
- 2cf635205ba16d1e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2cf635205ba16d1e_train_data.json
type:
field_input: content
field_instruction: title
field_output: combined
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: 0x1202/e22afb05-6f21-480f-9a58-333109c9bf3c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/2cf635205ba16d1e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 228ff6c3-a693-4550-a226-3e842e1eb5c7
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 228ff6c3-a693-4550-a226-3e842e1eb5c7
warmup_steps: 20
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# e22afb05-6f21-480f-9a58-333109c9bf3c
This model is a fine-tuned version of [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8793
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5634 | 0.0117 | 1 | 6.9915 |
| 0.6948 | 0.5848 | 50 | 0.8952 |
| 0.5409 | 1.1696 | 100 | 0.8878 |
| 0.6709 | 1.7544 | 150 | 0.8729 |
| 0.2772 | 2.3392 | 200 | 0.8793 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
leenaalsaghir/my-allam-7b | leenaalsaghir | "2025-03-02T21:15:42Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-02T21:11:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
betterdataai/large-tabular-model | betterdataai | "2025-02-27T11:04:27Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"text-generation",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-20T07:40:06Z" | ---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
- **Developed by:** betterdataai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
## Prerequisites
The following packages are needed to run inference:
```
unsloth
transformers
pandas
datasets
trl
torch
accelerate
scipy
```
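These can be installed in one step (exact versions depend on your CUDA / PyTorch environment):
```
pip install unsloth transformers pandas datasets trl torch accelerate scipy
```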
## Model Demonstration
This is a large tabular model that generates tabular data from a user-provided description of the data columns.
An example prompt looks like this:
```python
instruction = """
You are tasked with generating a synthetic dataset based on the following description. The dataset represents network traffic information. The dataset should include the following columns:
- IPV4_SRC_ADDR (String): IPv4 source address, following the standard format (e.g., "59.166.0.6", "149.171.126.0", "175.45.176.2").
- L4_SRC_PORT (Integer): IPv4 source port number, a value between 1024 and 65535 (e.g., 443).
- IPV4_DST_ADDR (String): IPv4 destination address, following the standard format (e.g., "149.171.126.6").
- L4_DST_PORT (Integer): IPv4 destination port number, a value between 1024 and 65535 (e.g., 80).
- PROTOCOL (Integer): IP protocol identifier byte, representing the protocol used (e.g., 6 for TCP or 17 for UDP).
- L7_PROTO (Integer): Layer 7 protocol (numeric), indicating the application protocol, ranging from 0 to 249 (e.g., 1 for HTTP, 2 for HTTPS).
- IN_BYTES (Integer): Incoming number of bytes, representing the data transferred into the network, ranging from 0 to 10,000,000 (e.g., 1500).
- OUT_BYTES (Integer): Outgoing number of bytes, representing the data transferred out of the network, ranging from 0 to 10,000,000 (e.g., 2000).
- IN_PKTS (Integer): Incoming number of packets, representing the count of packets entering the network (e.g., 120).
- OUT_PKTS (Integer): Outgoing number of packets, representing the count of packets leaving the network (e.g., 110).
- TCP_FLAGS (Integer): Cumulative of all TCP flags (e.g., 27, 0, 19, 18).
- FLOW_DURATION_MILLISECONDS (Integer): Flow duration in milliseconds, indicating how long the flow lasted (e.g., 15000).
- Label (Integer): Label for indicating malicious attack or not (e.g., 0 for benign traffic or 1 for attack)
"""
```
With the following code, we can generate tabular data:
```python
from unsloth import FastLanguageModel
from transformers import TextStreamer
max_seq_length = 2048
dtype = None
load_in_4bit = False
model, tokenizer = FastLanguageModel.from_pretrained(
model_name = "betterdataai/large-tabular-model",
max_seq_length = max_seq_length,
dtype = dtype,
load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model)
messages = [{"role": "system", "content": instruction},
            {"role": "user", "content": "Create 20 rows of data"}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True, # Must add for generation
    return_tensors = "pt",
).to("cuda")
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 2048,
                   use_cache = True, temperature = 1.5, min_p = 0.1)
```
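Because the model emits plain CSV text, the generation can also be captured instead of streamed and parsed with pandas (a sketch; the capture-and-parse step is our own addition, not part of the original card):
```python
import io
import pandas as pd

outputs = model.generate(input_ids = inputs, max_new_tokens = 2048,
                         use_cache = True, temperature = 1.5, min_p = 0.1)
# Decode only the newly generated tokens, then load the CSV block.
generated = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens = True)
df = pd.read_csv(io.StringIO(generated))
print(df.head())
```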
The output looks like this:
```
IPV4_SRC_ADDR,L4_SRC_PORT,IPV4_DST_ADDR,L4_DST_PORT,PROTOCOL,L7_PROTO,IN_BYTES,OUT_BYTES,IN_PKTS,OUT_PKTS,TCP_FLAGS,FLOW_DURATION_MILLISECONDS,Label
175.45.176.3,65502,149.171.126.11,80,6,7.0,800,1338,10,10,27,1429,0
59.166.0.2,51487,149.171.126.3,80,6,7.0,1580,10168,12,18,27,0,0
59.166.0.0,13943,149.171.126.0,11862,6,36.0,2334,16822,36,38,27,9,0
59.166.0.7,40294,149.171.126.7,21,6,1.0,2934,3740,52,54,27,844,0
59.166.0.9,63416,149.171.126.5,21,6,1.0,2934,3742,52,54,27,0,0
175.45.176.2,0,149.171.126.17,0,45,0.0,200,0,2,0,0,0,1
175.45.176.3,64403,149.171.126.14,179,6,13.0,472,336,10,8,19,538,0
59.166.0.8,39142,149.171.126.3,53,17,5.0,130,162,2,2,0,1,0
59.166.0.3,60342,149.171.126.4,25,6,3.0,37868,3380,54,42,27,35,0
59.166.0.3,40433,149.171.126.5,5190,6,0.0,2158,2464,24,24,27,6,0
59.166.0.0,21116,149.171.126.5,53,17,5.0,130,162,2,2,0,0,0
175.45.176.1,0,149.171.126.17,0,23,0.0,200,0,2,0,0,0,1
59.166.0.5,27940,149.171.126.2,21,6,1.0,2934,3738,52,54,27,4294952,0
59.166.0.2,14905,149.171.126.1,22,6,92.0,3728,5474,32,24,27,0,0
175.45.176.1,0,149.171.126.10,0,33,0.0,200,0,2,0,0,0,1
59.166.0.3,37986,149.171.126.0,5190,6,0.0,1470,1728,22,14,27,4,0
59.166.0.1,49949,149.171.126.7,80,6,7.0,1580,10168,12,18,27,4294952,0
59.166.0.2,51911,149.171.126.6,53,17,0.0,146,178,2,2,0,0,0
59.166.0.1,17727,149.171.126.9,5190,6,0.0,2158,2464,24,24,27,7,0
59.166.0.3,56144,149.171.126.0,5190,6,0.0,1470,1728,22,14,27,0,0<|eot_id|>
``` |
KPrashanth/Reinforce_Agent_playing_Cartpole_v1 | KPrashanth | "2023-07-06T04:36:55Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-06T04:36:41Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_Agent_playing_Cartpole_v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
faaany/kto-aligned-model | faaany | "2024-09-23T09:41:35Z" | 73 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"kto",
"generated_from_trainer",
"conversational",
"base_model:trl-lib/qwen1.5-1.8b-sft",
"base_model:finetune:trl-lib/qwen1.5-1.8b-sft",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-23T09:31:45Z" | ---
library_name: transformers
license: other
base_model: trl-lib/qwen1.5-1.8b-sft
tags:
- trl
- kto
- generated_from_trainer
model-index:
- name: kto-aligned-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kto-aligned-model
This model is a fine-tuned version of [trl-lib/qwen1.5-1.8b-sft](https://huggingface.co/trl-lib/qwen1.5-1.8b-sft) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0
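These settings map directly onto TRL's `KTOConfig`; a minimal sketch of an equivalent run (the dataset below is a placeholder, since the card does not name the actual training data):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model = AutoModelForCausalLM.from_pretrained("trl-lib/qwen1.5-1.8b-sft")
tokenizer = AutoTokenizer.from_pretrained("trl-lib/qwen1.5-1.8b-sft")
train_dataset = load_dataset("trl-lib/kto-mix-14k", split="train")  # placeholder dataset

args = KTOConfig(
    output_dir="kto-aligned-model",
    learning_rate=1e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1.0,
    seed=42,
)
trainer = KTOTrainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```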
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
jungjongho/wav2vec2-large-xlsr-korean-demo-colab_epoch15 | jungjongho | "2022-07-29T21:25:56Z" | 3 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-07-29T16:39:04Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xlsr-korean-demo-colab_epoch15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-korean-demo-colab_epoch15
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4133
- Wer: 0.3801
## Model description
More information needed
## Intended uses & limitations
More information needed
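In the absence of usage docs, a minimal CTC inference sketch (the audio path and the 16 kHz mono assumption are ours):
```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo = "jungjongho/wav2vec2-large-xlsr-korean-demo-colab_epoch15"
processor = Wav2Vec2Processor.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

speech, sr = sf.read("korean_sample.wav")  # expected: 16 kHz mono audio
inputs = processor(speech, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```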
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 16.9017 | 0.8 | 400 | 4.6273 | 1.0 |
| 4.4633 | 1.6 | 800 | 4.4419 | 1.0 |
| 4.2262 | 2.4 | 1200 | 3.8477 | 0.9994 |
| 2.4402 | 3.21 | 1600 | 1.3564 | 0.8111 |
| 1.3499 | 4.01 | 2000 | 0.9070 | 0.6664 |
| 0.9922 | 4.81 | 2400 | 0.7496 | 0.6131 |
| 0.8271 | 5.61 | 2800 | 0.6240 | 0.5408 |
| 0.6918 | 6.41 | 3200 | 0.5506 | 0.5026 |
| 0.6015 | 7.21 | 3600 | 0.5303 | 0.4935 |
| 0.5435 | 8.02 | 4000 | 0.4951 | 0.4696 |
| 0.4584 | 8.82 | 4400 | 0.4677 | 0.4432 |
| 0.4258 | 9.62 | 4800 | 0.4602 | 0.4307 |
| 0.3906 | 10.42 | 5200 | 0.4456 | 0.4195 |
| 0.3481 | 11.22 | 5600 | 0.4265 | 0.4062 |
| 0.3216 | 12.02 | 6000 | 0.4241 | 0.4046 |
| 0.2908 | 12.83 | 6400 | 0.4106 | 0.3941 |
| 0.2747 | 13.63 | 6800 | 0.4146 | 0.3855 |
| 0.2633 | 14.43 | 7200 | 0.4133 | 0.3801 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
rak-r05/b1ef1e53-a37f-4d77-b036-19e30648c223 | rak-r05 | "2025-02-03T05:21:17Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3.5-mini-instruct",
"base_model:adapter:unsloth/Phi-3.5-mini-instruct",
"license:mit",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-03T04:50:35Z" | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b1ef1e53-a37f-4d77-b036-19e30648c223
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Phi-3.5-mini-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 39a31223f62f5174_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/39a31223f62f5174_train_data.json
type:
field_instruction: bug_function
field_output: functions
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: rak-r05/b1ef1e53-a37f-4d77-b036-19e30648c223
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0004
load_in_4bit: false
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 150
micro_batch_size: 2
mlflow_experiment_name: /tmp/39a31223f62f5174_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a97aac21-5407-4285-bbd1-1de28d289a7a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a97aac21-5407-4285-bbd1-1de28d289a7a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# b1ef1e53-a37f-4d77-b036-19e30648c223
This model is a fine-tuned version of [unsloth/Phi-3.5-mini-instruct](https://huggingface.co/unsloth/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
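No usage instructions are given; since this repository stores a LoRA adapter (PEFT), a minimal sketch of attaching it to the base model for inference follows. Note the NaN evaluation loss above: outputs may not be usable.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Phi-3.5-mini-instruct")
model = PeftModel.from_pretrained(base, "rak-r05/b1ef1e53-a37f-4d77-b036-19e30648c223")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Phi-3.5-mini-instruct")

prompt = "Fix the bug in this function: ..."  # task format guessed from the axolotl config
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```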
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 10.2103 | 0.0002 | 1 | nan |
| 12.2052 | 0.0070 | 38 | nan |
| 0.0 | 0.0140 | 76 | nan |
| 11.9777 | 0.0210 | 114 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
huggingtweets/claregrall | huggingtweets | "2022-06-02T13:17:25Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-06-02T13:01:10Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/claregrall/1654175841134/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1197255800114339842/9ptyNMcO_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Clare Grall</div>
<div style="text-align: center; font-size: 14px;">@claregrall</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Clare Grall.
| Data | Clare Grall |
| --- | --- |
| Tweets downloaded | 873 |
| Retweets | 176 |
| Short tweets | 51 |
| Tweets kept | 646 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3fu0nxex/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @claregrall's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2yox9655) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2yox9655/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/claregrall')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf | RichardErkhov | "2025-02-26T08:18:51Z" | 0 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-02-26T06:42:19Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-2b-merged-digital-security-and-privacy - GGUF
- Model creator: https://huggingface.co/lightbird-ai/
- Original model: https://huggingface.co/lightbird-ai/gemma-2b-merged-digital-security-and-privacy/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [gemma-2b-merged-digital-security-and-privacy.Q2_K.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q2_K.gguf) | Q2_K | 1.15GB |
| [gemma-2b-merged-digital-security-and-privacy.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.IQ3_XS.gguf) | IQ3_XS | 1.22GB |
| [gemma-2b-merged-digital-security-and-privacy.IQ3_S.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.IQ3_S.gguf) | IQ3_S | 1.27GB |
| [gemma-2b-merged-digital-security-and-privacy.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q3_K_S.gguf) | Q3_K_S | 1.27GB |
| [gemma-2b-merged-digital-security-and-privacy.IQ3_M.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.IQ3_M.gguf) | IQ3_M | 1.3GB |
| [gemma-2b-merged-digital-security-and-privacy.Q3_K.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q3_K.gguf) | Q3_K | 1.36GB |
| [gemma-2b-merged-digital-security-and-privacy.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q3_K_M.gguf) | Q3_K_M | 1.36GB |
| [gemma-2b-merged-digital-security-and-privacy.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q3_K_L.gguf) | Q3_K_L | 1.44GB |
| [gemma-2b-merged-digital-security-and-privacy.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.IQ4_XS.gguf) | IQ4_XS | 1.47GB |
| [gemma-2b-merged-digital-security-and-privacy.Q4_0.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q4_0.gguf) | Q4_0 | 1.52GB |
| [gemma-2b-merged-digital-security-and-privacy.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.IQ4_NL.gguf) | IQ4_NL | 1.53GB |
| [gemma-2b-merged-digital-security-and-privacy.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q4_K_S.gguf) | Q4_K_S | 1.53GB |
| [gemma-2b-merged-digital-security-and-privacy.Q4_K.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q4_K.gguf) | Q4_K | 1.59GB |
| [gemma-2b-merged-digital-security-and-privacy.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q4_K_M.gguf) | Q4_K_M | 1.59GB |
| [gemma-2b-merged-digital-security-and-privacy.Q4_1.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q4_1.gguf) | Q4_1 | 1.64GB |
| [gemma-2b-merged-digital-security-and-privacy.Q5_0.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q5_0.gguf) | Q5_0 | 1.75GB |
| [gemma-2b-merged-digital-security-and-privacy.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q5_K_S.gguf) | Q5_K_S | 1.75GB |
| [gemma-2b-merged-digital-security-and-privacy.Q5_K.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q5_K.gguf) | Q5_K | 1.79GB |
| [gemma-2b-merged-digital-security-and-privacy.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q5_K_M.gguf) | Q5_K_M | 1.79GB |
| [gemma-2b-merged-digital-security-and-privacy.Q5_1.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q5_1.gguf) | Q5_1 | 1.87GB |
| [gemma-2b-merged-digital-security-and-privacy.Q6_K.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q6_K.gguf) | Q6_K | 2.0GB |
| [gemma-2b-merged-digital-security-and-privacy.Q8_0.gguf](https://huggingface.co/RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf/blob/main/gemma-2b-merged-digital-security-and-privacy.Q8_0.gguf) | Q8_0 | 2.59GB |
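As a hedged example of using one of the files above from Python (assuming the `huggingface_hub` and `llama-cpp-python` packages are installed):
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant file, then load it with the llama.cpp Python bindings.
path = hf_hub_download(
    repo_id="RichardErkhov/lightbird-ai_-_gemma-2b-merged-digital-security-and-privacy-gguf",
    filename="gemma-2b-merged-digital-security-and-privacy.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Hello", max_tokens=64)["choices"][0]["text"])
```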
Original model description:
---
base_model: unsloth/gemma-2-2b-it-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
---
# Uploaded model
- **Developed by:** lightbird-ai
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-2b-it-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/NightyGurps-14b-v1.1-GGUF | mradermacher | "2024-11-14T09:11:09Z" | 73 | 1 | transformers | [
"transformers",
"gguf",
"ru",
"base_model:AlexBefest/NightyGurps-14b-v1.1",
"base_model:quantized:AlexBefest/NightyGurps-14b-v1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-13T04:30:02Z" | ---
base_model: AlexBefest/NightyGurps-14b-v1.1
language:
- ru
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/AlexBefest/NightyGurps-14b-v1.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/NightyGurps-14b-v1.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NightyGurps-14b-v1.1-GGUF/resolve/main/NightyGurps-14b-v1.1.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/NightyGurps-14b-v1.1-GGUF/resolve/main/NightyGurps-14b-v1.1.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/NightyGurps-14b-v1.1-GGUF/resolve/main/NightyGurps-14b-v1.1.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NightyGurps-14b-v1.1-GGUF/resolve/main/NightyGurps-14b-v1.1.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/NightyGurps-14b-v1.1-GGUF/resolve/main/NightyGurps-14b-v1.1.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/NightyGurps-14b-v1.1-GGUF/resolve/main/NightyGurps-14b-v1.1.Q4_0_4_4.gguf) | Q4_0_4_4 | 8.6 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/NightyGurps-14b-v1.1-GGUF/resolve/main/NightyGurps-14b-v1.1.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NightyGurps-14b-v1.1-GGUF/resolve/main/NightyGurps-14b-v1.1.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NightyGurps-14b-v1.1-GGUF/resolve/main/NightyGurps-14b-v1.1.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/NightyGurps-14b-v1.1-GGUF/resolve/main/NightyGurps-14b-v1.1.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/NightyGurps-14b-v1.1-GGUF/resolve/main/NightyGurps-14b-v1.1.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NightyGurps-14b-v1.1-GGUF/resolve/main/NightyGurps-14b-v1.1.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
softaken/Softaken-PST-to-MBOX-Converter | softaken | "2025-04-16T09:53:36Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-16T09:52:06Z" | |
mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF | mradermacher | "2024-11-17T09:49:13Z" | 33 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:win10/Verdandi-Qwen2.5-7B",
"base_model:quantized:win10/Verdandi-Qwen2.5-7B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-11-17T08:30:38Z" | ---
base_model: win10/Verdandi-Qwen2.5-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/win10/Verdandi-Qwen2.5-7B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Verdandi-Qwen2.5-7B-i1-GGUF/resolve/main/Verdandi-Qwen2.5-7B.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Qwen-2.5-Sommelier-descriptors-topics-GGUF | mradermacher | "2025-04-17T15:58:41Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:SommelierDuParfum/Qwen-2.5-Sommelier-descriptors-topics",
"base_model:quantized:SommelierDuParfum/Qwen-2.5-Sommelier-descriptors-topics",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-17T15:21:43Z" | ---
base_model: SommelierDuParfum/Qwen-2.5-Sommelier-descriptors-topics
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SommelierDuParfum/Qwen-2.5-Sommelier-descriptors-topics
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-Sommelier-descriptors-topics-GGUF/resolve/main/Qwen-2.5-Sommelier-descriptors-topics.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-Sommelier-descriptors-topics-GGUF/resolve/main/Qwen-2.5-Sommelier-descriptors-topics.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-Sommelier-descriptors-topics-GGUF/resolve/main/Qwen-2.5-Sommelier-descriptors-topics.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-Sommelier-descriptors-topics-GGUF/resolve/main/Qwen-2.5-Sommelier-descriptors-topics.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-Sommelier-descriptors-topics-GGUF/resolve/main/Qwen-2.5-Sommelier-descriptors-topics.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-Sommelier-descriptors-topics-GGUF/resolve/main/Qwen-2.5-Sommelier-descriptors-topics.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-Sommelier-descriptors-topics-GGUF/resolve/main/Qwen-2.5-Sommelier-descriptors-topics.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-Sommelier-descriptors-topics-GGUF/resolve/main/Qwen-2.5-Sommelier-descriptors-topics.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-Sommelier-descriptors-topics-GGUF/resolve/main/Qwen-2.5-Sommelier-descriptors-topics.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-Sommelier-descriptors-topics-GGUF/resolve/main/Qwen-2.5-Sommelier-descriptors-topics.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-Sommelier-descriptors-topics-GGUF/resolve/main/Qwen-2.5-Sommelier-descriptors-topics.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen-2.5-Sommelier-descriptors-topics-GGUF/resolve/main/Qwen-2.5-Sommelier-descriptors-topics.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Zack-Z/llama31_8bi_CoTsft_rs0_1_5cut_hp5_e2 | Zack-Z | "2025-03-18T22:20:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-18T21:36:26Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/MistraMystic-GGUF | mradermacher | "2024-10-11T16:55:58Z" | 183 | 1 | transformers | [
"transformers",
"gguf",
"MistraMystic",
"Conversational AI",
"Personality",
"Persona-dialogue",
"Dialogue-systems",
"Human-like assistant",
"Mistral-7B",
"Mistral",
"en",
"base_model:choco58/MistraMystic",
"base_model:quantized:choco58/MistraMystic",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-17T19:58:08Z" | ---
base_model: choco58/MistraMystic
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- MistraMystic
- Conversational AI
- Personality
- Persona-dialogue
- Dialogue-systems
- Human-like assistant
- Mistral-7B
- Mistral
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/choco58/MistraMystic
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/MistraMystic-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MistraMystic-GGUF/resolve/main/MistraMystic.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ahmedheakl/arazn-gemma1.1-7B-eng | ahmedheakl | "2024-07-01T15:27:52Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"translation",
"ar",
"en",
"dataset:ahmedheakl/arzen-llm-dataset",
"arxiv:2406.18120",
"license:mit",
"endpoints_compatible",
"region:us"
] | translation | "2024-04-14T05:05:53Z" | ---
license: mit
datasets:
- ahmedheakl/arzen-llm-dataset
language:
- ar
- en
metrics:
- bleu
- ecody726/bertscore
- meteor
library_name: transformers
pipeline_tag: translation
---
**Please see paper & code for more information:**
- https://github.com/ahmedheakl/arazn-llm
- https://arxiv.org/abs/2406.18120
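A minimal loading sketch (the prompt format is our assumption; see the linked GitHub repository for the exact translation template used in the paper):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ahmedheakl/arazn-gemma1.1-7B-eng"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "Translate the following code-switched Egyptian Arabic-English sentence to English:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```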
## Citation
**BibTeX:**
```
@article{heakl2024arzen,
title={ArzEn-LLM: Code-Switched Egyptian Arabic-English Translation and Speech Recognition Using LLMs},
author={Heakl, Ahmed and Zaghloul, Youssef and Ali, Mennatullah and Hossam, Rania and Gomaa, Walid},
journal={arXiv preprint arXiv:2406.18120},
year={2024}
}
```
## Model Card Authors
- Email: [email protected]
- Linkedin: https://linkedin.com/in/ahmed-heakl |
saumitrakapoor/saumitra-kapoor | saumitrakapoor | "2023-02-26T23:30:02Z" | 12 | 2 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-02-26T23:25:54Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Saumitra-Kapoor Dreambooth model trained by saumitrakapoor with [buildspace's DreamBooth](https://colab.research.google.com/github/buildspace/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) notebook
Build your own using the [AI Avatar project](https://buildspace.so/builds/ai-avatar)!
To get started head over to the [project dashboard](https://buildspace.so/p/build-ai-avatars).
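A minimal sampling sketch with diffusers (the prompt and trigger phrase are guesses; DreamBooth concepts usually respond to the name used at training time):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "saumitrakapoor/saumitra-kapoor", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of saumitra-kapoor person, studio portrait").images[0]
image.save("sample.png")
```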
Sample pictures of this concept:
|
YakovElm/MariaDB10Classic_Balance_DATA_ratio_3 | YakovElm | "2023-05-31T02:50:25Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-31T02:49:45Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB10Classic_Balance_DATA_ratio_3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB10Classic_Balance_DATA_ratio_3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4529
- Train Accuracy: 0.7861
- Validation Loss: 0.4950
- Validation Accuracy: 0.7615
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
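Absent further documentation, a minimal TensorFlow inference sketch (the input text is a placeholder and the label semantics are not documented on this card):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "YakovElm/MariaDB10Classic_Balance_DATA_ratio_3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example issue report text", return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())
```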
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5801 | 0.7268 | 0.5206 | 0.7615 | 0 |
| 0.5010 | 0.7809 | 0.5068 | 0.7615 | 1 |
| 0.4529 | 0.7861 | 0.4950 | 0.7615 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
cleanrl/Qbert-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3 | cleanrl | "2023-03-26T02:17:55Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Qbert-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-26T02:17:53Z" | ---
tags:
- Qbert-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Qbert-v5
type: Qbert-v5
metrics:
- type: mean_reward
value: 21432.50 +/- 3557.67
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Qbert-v5**
This is a trained model of a PPO agent playing Qbert-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --env-id Qbert-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Qbert-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/cleanba_impala_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Qbert-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Qbert-v5-cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_impala_envpool_impala_atari_wrapper.py --exp-name cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4 --distributed --learner-device-ids 1 --local-num-envs 30 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Qbert-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 30,
'async_update': 1,
'batch_size': 2400,
'capture_video': False,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Qbert-v5',
'exp_name': 'cleanba_impala_envpool_impala_atari_wrapper_a0_l1_d4',
'gamma': 0.99,
'global_learner_decices': ['gpu:1', 'gpu:3', 'gpu:5', 'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1],
'learner_devices': ['gpu:1'],
'learning_rate': 0.00025,
'local_batch_size': 600,
'local_minibatch_size': 300,
'local_num_envs': 30,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 1200,
'num_envs': 120,
'num_minibatches': 2,
'num_steps': 20,
'num_updates': 20833,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 4}
```
|
akera/whisper-medium-sb-lug-eng | akera | "2024-07-16T13:07:28Z" | 33 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:generator",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-07-14T18:48:04Z" | ---
base_model: openai/whisper-medium
datasets:
- generator
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-medium-sb-lug-eng
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-sb-lug-eng
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1720
- Wer Lug: 0.81
- Wer Eng: 0.068
- Wer Mean: 0.439
- Cer Lug: 0.494
- Cer Eng: 0.039
- Cer Mean: 0.267
## Model description
More information needed
## Intended uses & limitations
More information needed
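Pending official docs, a minimal transcription sketch with the transformers ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="akera/whisper-medium-sb-lug-eng",
               chunk_length_s=30)  # chunking enables long-form audio
print(asr("audio.wav")["text"])
```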
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 30000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Lug | Wer Eng | Wer Mean | Cer Lug | Cer Eng | Cer Mean |
|:-------------:|:------:|:-----:|:---------------:|:-------:|:-------:|:--------:|:-------:|:-------:|:--------:|
| 0.9804 | 0.0167 | 500 | 0.3683 | 0.692 | 0.043 | 0.368 | 0.203 | 0.019 | 0.111 |
| 0.7775 | 0.0333 | 1000 | 0.2594 | 0.725 | 0.044 | 0.385 | 0.395 | 0.019 | 0.207 |
| 0.6492 | 0.05 | 1500 | 0.2316 | 0.649 | 0.041 | 0.345 | 0.263 | 0.02 | 0.142 |
| 0.6128 | 0.0667 | 2000 | 0.2111 | 0.513 | 0.04 | 0.277 | 0.197 | 0.018 | 0.108 |
| 0.543 | 0.0833 | 2500 | 0.2023 | 0.579 | 0.043 | 0.311 | 0.239 | 0.018 | 0.129 |
| 0.5461 | 0.1 | 3000 | 0.1932 | 0.425 | 0.04 | 0.233 | 0.138 | 0.019 | 0.078 |
| 0.5545 | 0.1167 | 3500 | 0.1836 | 0.624 | 0.043 | 0.334 | 0.381 | 0.021 | 0.201 |
| 0.4895 | 0.1333 | 4000 | 0.1802 | 0.407 | 0.043 | 0.225 | 0.156 | 0.022 | 0.089 |
| 0.4922 | 0.15 | 4500 | 0.1771 | 0.377 | 0.051 | 0.214 | 0.136 | 0.033 | 0.084 |
| 0.521 | 0.1667 | 5000 | 0.1817 | 0.316 | 0.049 | 0.183 | 0.097 | 0.028 | 0.062 |
| 0.3948 | 1.0153 | 5500 | 0.1724 | 0.422 | 0.079 | 0.251 | 0.17 | 0.057 | 0.113 |
| 0.3914 | 1.032 | 6000 | 0.1727 | 0.744 | 0.04 | 0.392 | 0.651 | 0.018 | 0.334 |
| 0.3807 | 1.0487 | 6500 | 0.1730 | 0.585 | 0.053 | 0.319 | 0.428 | 0.028 | 0.228 |
| 0.395 | 1.0653 | 7000 | 0.1701 | 0.737 | 0.043 | 0.39 | 0.635 | 0.024 | 0.329 |
| 0.3774 | 1.082 | 7500 | 0.1654 | 0.545 | 0.046 | 0.296 | 0.396 | 0.024 | 0.21 |
| 0.4017 | 1.0987 | 8000 | 0.1626 | 0.465 | 0.046 | 0.256 | 0.28 | 0.024 | 0.152 |
| 0.3901 | 1.1153 | 8500 | 0.1593 | 0.516 | 0.051 | 0.283 | 0.25 | 0.026 | 0.138 |
| 0.3829 | 1.1320 | 9000 | 0.1608 | 0.48 | 0.049 | 0.264 | 0.247 | 0.024 | 0.135 |
| 0.3536 | 1.1487 | 9500 | 0.1657 | 0.37 | 0.043 | 0.207 | 0.143 | 0.021 | 0.082 |
| 0.3506 | 1.1653 | 10000 | 0.1606 | 0.395 | 0.041 | 0.218 | 0.172 | 0.021 | 0.097 |
| 0.2737 | 2.014 | 10500 | 0.1604 | 0.457 | 0.07 | 0.263 | 0.235 | 0.044 | 0.139 |
| 0.3073 | 2.0307 | 11000 | 0.1626 | 0.458 | 0.046 | 0.252 | 0.243 | 0.022 | 0.132 |
| 0.2906 | 2.0473 | 11500 | 0.1581 | 0.444 | 0.062 | 0.253 | 0.222 | 0.038 | 0.13 |
| 0.2882 | 2.064 | 12000 | 0.1591 | 0.519 | 0.053 | 0.286 | 0.3 | 0.024 | 0.162 |
| 0.2642 | 2.0807 | 12500 | 0.1630 | 0.547 | 0.05 | 0.299 | 0.293 | 0.029 | 0.161 |
| 0.2848 | 2.0973 | 13000 | 0.1627 | 0.509 | 0.055 | 0.282 | 0.244 | 0.03 | 0.137 |
| 0.2887 | 2.114 | 13500 | 0.1585 | 0.524 | 0.067 | 0.296 | 0.28 | 0.047 | 0.163 |
| 0.2879 | 2.1307 | 14000 | 0.1593 | 0.646 | 0.065 | 0.356 | 0.355 | 0.045 | 0.2 |
| 0.2955 | 2.1473 | 14500 | 0.1581 | 0.873 | 0.062 | 0.468 | 0.512 | 0.038 | 0.275 |
| 0.2639 | 2.164 | 15000 | 0.1533 | 0.772 | 0.057 | 0.414 | 0.454 | 0.037 | 0.245 |
| 0.2111 | 3.0127 | 15500 | 0.1622 | 0.776 | 0.074 | 0.425 | 0.518 | 0.046 | 0.282 |
| 0.2299 | 3.0293 | 16000 | 0.1628 | 0.849 | 0.061 | 0.455 | 0.559 | 0.036 | 0.297 |
| 0.2279 | 3.046 | 16500 | 0.1633 | 0.803 | 0.064 | 0.434 | 0.632 | 0.036 | 0.334 |
| 0.2339 | 3.0627 | 17000 | 0.1617 | 0.845 | 0.045 | 0.445 | 0.553 | 0.022 | 0.288 |
| 0.2387 | 3.0793 | 17500 | 0.1599 | 0.773 | 0.055 | 0.414 | 0.436 | 0.029 | 0.232 |
| 0.2098 | 3.096 | 18000 | 0.1616 | 0.675 | 0.059 | 0.367 | 0.45 | 0.037 | 0.243 |
| 0.2201 | 3.1127 | 18500 | 0.1619 | 0.713 | 0.066 | 0.389 | 0.476 | 0.039 | 0.257 |
| 0.2312 | 3.1293 | 19000 | 0.1603 | 0.994 | 0.053 | 0.524 | 0.605 | 0.03 | 0.318 |
| 0.2389 | 3.146 | 19500 | 0.1572 | 0.751 | 0.054 | 0.403 | 0.455 | 0.032 | 0.244 |
| 0.2183 | 3.1627 | 20000 | 0.1635 | 0.667 | 0.056 | 0.362 | 0.42 | 0.034 | 0.227 |
| 0.1707 | 4.0113 | 20500 | 0.1654 | 0.682 | 0.05 | 0.366 | 0.433 | 0.026 | 0.23 |
| 0.1874 | 4.028 | 21000 | 0.1641 | 0.744 | 0.054 | 0.399 | 0.425 | 0.03 | 0.228 |
| 0.1836 | 4.0447 | 21500 | 0.1666 | 0.651 | 0.063 | 0.357 | 0.397 | 0.039 | 0.218 |
| 0.1847 | 4.0613 | 22000 | 0.1635 | 0.788 | 0.069 | 0.429 | 0.502 | 0.044 | 0.273 |
| 0.1742 | 4.078 | 22500 | 0.1651 | 0.695 | 0.051 | 0.373 | 0.4 | 0.027 | 0.214 |
| 0.1733 | 4.0947 | 23000 | 0.1652 | 0.678 | 0.064 | 0.371 | 0.427 | 0.039 | 0.233 |
| 0.1651 | 4.1113 | 23500 | 0.1659 | 0.666 | 0.071 | 0.369 | 0.458 | 0.046 | 0.252 |
| 0.1924 | 4.128 | 24000 | 0.1664 | 0.792 | 0.069 | 0.431 | 0.486 | 0.046 | 0.266 |
| 0.1828 | 4.1447 | 24500 | 0.1670 | 0.746 | 0.068 | 0.407 | 0.538 | 0.043 | 0.291 |
| 0.165 | 4.1613 | 25000 | 0.1675 | 0.746 | 0.072 | 0.409 | 0.469 | 0.047 | 0.258 |
| 0.1437 | 5.01 | 25500 | 0.1706 | 0.728 | 0.066 | 0.397 | 0.481 | 0.04 | 0.261 |
| 0.148 | 5.0267 | 26000 | 0.1700 | 0.755 | 0.069 | 0.412 | 0.457 | 0.041 | 0.249 |
| 0.1509 | 5.0433 | 26500 | 0.1700 | 0.787 | 0.068 | 0.427 | 0.497 | 0.039 | 0.268 |
| 0.1442 | 5.06 | 27000 | 0.1715 | 0.762 | 0.068 | 0.415 | 0.47 | 0.039 | 0.254 |
| 0.1282 | 5.0767 | 27500 | 0.1698 | 0.796 | 0.064 | 0.43 | 0.477 | 0.037 | 0.257 |
| 0.1377 | 5.0933 | 28000 | 0.1710 | 0.796 | 0.068 | 0.432 | 0.481 | 0.04 | 0.261 |
| 0.1456 | 5.11 | 28500 | 0.1719 | 0.758 | 0.07 | 0.414 | 0.481 | 0.04 | 0.26 |
| 0.143 | 5.1267 | 29000 | 0.1716 | 0.795 | 0.07 | 0.433 | 0.488 | 0.04 | 0.264 |
| 0.1484 | 5.1433 | 29500 | 0.1719 | 0.812 | 0.069 | 0.44 | 0.492 | 0.04 | 0.266 |
| 0.1463 | 5.16 | 30000 | 0.1720 | 0.81 | 0.068 | 0.439 | 0.494 | 0.039 | 0.267 |
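The `Wer Mean` and `Cer Mean` columns are consistent with an arithmetic mean of the Luganda and English scores (e.g., (0.81 + 0.068) / 2 ≈ 0.439 at step 30000). Below is a minimal sketch of computing such scores with the `evaluate` library, using placeholder transcripts since the evaluation data is not published in this card:

```python
import evaluate

# Placeholder predictions/references for each language; substitute real
# transcripts from your Luganda and English evaluation sets.
preds_lug, refs_lug = ["placeholder transcript"], ["placeholder transcript"]
preds_eng, refs_eng = ["another transcript"], ["another transcript"]

wer = evaluate.load("wer")
wer_lug = wer.compute(predictions=preds_lug, references=refs_lug)
wer_eng = wer.compute(predictions=preds_eng, references=refs_eng)
print({"wer_lug": wer_lug, "wer_eng": wer_eng,
       "wer_mean": (wer_lug + wer_eng) / 2})
```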
### Framework versions
- Transformers 4.42.4
- Pytorch 2.2.0
- Datasets 2.20.0
- Tokenizers 0.19.1
|
FabioTrindade/Llama-2-7b-hf-Q8_0-GGUF | FabioTrindade | "2025-03-04T13:21:16Z" | 0 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-2",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:quantized:meta-llama/Llama-2-7b-hf",
"license:llama2",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-04T13:20:45Z" | ---
extra_gated_heading: You need to share contact information with Meta to access this
model
extra_gated_prompt: "### LLAMA 2 COMMUNITY LICENSE AGREEMENT\n\"Agreement\" means\
\ the terms and conditions for use, reproduction, distribution and modification\
\ of the Llama Materials set forth herein. \n\"Documentation\" means the specifications,\
\ manuals and documentation accompanying Llama 2 distributed by Meta at https://ai.meta.com/resources/models-and-libraries/llama-downloads/.\
\ \n\"Licensee\" or \"you\" means you, or your employer or any other person or\
\ entity (if you are entering into this Agreement on such person or entity's behalf),\
\ of the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf. \n\"Llama 2\"\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at ai.meta.com/resources/models-and-libraries/llama-downloads/.\n\"Llama\
\ Materials\" means, collectively, Meta's proprietary Llama 2 and documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland). \n\
\nBy clicking \"I Accept\" below or by using or distributing any portion or element\
\ of the Llama Materials, you agree to be bound by this Agreement.\n1. License Rights\
\ and Redistribution. \na. Grant of Rights. You are granted a non-exclusive, worldwide,\
\ non- transferable and royalty-free limited license under Meta's intellectual property\
\ or other rights owned by Meta embodied in the Llama Materials to use, reproduce,\
\ distribute, copy, create derivative works of, and make modifications to the Llama\
\ Materials. \nb. Redistribution and Use.\ni. If you distribute or make the Llama\
\ Materials, or any derivative works thereof, available to a third party, you shall\
\ provide a copy of this Agreement to such third party. \nii. If you receive Llama\
\ Materials, or any derivative works thereof, from a Licensee as part of an integrated\
\ end user product, then Section 2 of this Agreement will not apply to you. \n\
iii. You must retain in all copies of the Llama Materials that you distribute the\
\ following attribution notice within a \"Notice\" text file distributed as a part\
\ of such copies: \"Llama 2 is licensed under the LLAMA 2 Community License, Copyright\
\ (c) Meta Platforms, Inc. All Rights Reserved.\"\niv. Your use of the Llama Materials\
\ must comply with applicable laws and regulations (including trade compliance\
\ laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials\
\ (available at https://ai.meta.com/llama/use-policy), which is hereby incorporated\
\ by reference into this Agreement.\nv. You will not use the Llama Materials or\
\ any output or results of the Llama Materials to improve any other large language\
\ model (excluding Llama 2 or derivative works thereof). \n\n2. Additional Commercial\
\ Terms. If, on the Llama 2 version release date, the monthly active users of the\
\ products or services made available by or for Licensee, or Licensee's affiliates,\
\ is greater than 700 million monthly active users in the preceding calendar month,\
\ you must request a license from Meta, which Meta may grant to you in its sole\
\ discretion, and you are not authorized to exercise any of the rights under this\
\ Agreement unless or until Meta otherwise expressly grants you such rights.\n\
3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS\
\ AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN \"AS IS\" BASIS, WITHOUT\
\ WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION,\
\ ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A\
\ PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS\
\ OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED\
\ WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation\
\ of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY\
\ OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE,\
\ ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL,\
\ CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS\
\ AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials.\nb. Subject to Meta's ownership of Llama Materials and derivatives\
\ made by or for Meta, with respect to any derivative works and modifications of\
\ the Llama Materials that are made by you, as between you and Meta, you are and\
\ will be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 2 outputs\
\ or results, or any portion of any of the foregoing, constitutes infringement\
\ of intellectual property or other rights owned or licensable by you, then any\
\ licenses granted to you under this Agreement shall terminate as of the date such\
\ litigation or claim is filed or instituted. You will indemnify and hold harmless\
\ Meta from and against any claim by any third party arising out of or related \
\ to your use or distribution of the Llama Materials.\n6. Term and Termination.\
\ The term of this Agreement will commence upon your acceptance of this Agreement\
\ or access to the Llama Materials and will continue in full force and effect until\
\ terminated in accordance with the terms and conditions herein. Meta may terminate\
\ this Agreement if you are in breach of any term or condition of this Agreement.\
\ Upon termination of this Agreement, you shall delete and cease use of the Llama\
\ Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.\
\ \n7. Governing Law and Jurisdiction. This Agreement will be governed and construed\
\ under the laws of the State of California without regard to choice of law principles,\
\ and the UN Convention on Contracts for the International Sale of Goods does not\
\ apply to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement. \n### Llama 2 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 2. If you access or use Llama 2, you agree to this Acceptable Use Policy\
\ (“Policy”). The most recent copy of this policy can be found at [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).\n\
#### Prohibited Uses\nWe want everyone to use Llama 2 safely and responsibly. You\
\ agree you will not use, or allow others to use, Llama 2 to:\n1. Violate the law\
\ or others’ rights, including to:\n 1. Engage in, promote, generate, contribute\
\ to, encourage, plan, incite, or further illegal or unlawful activity or content,\
\ such as: \n 1. Violence or terrorism \n 2. Exploitation or harm\
\ to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4.\
\ The illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6.\
\ Any other criminal activity\n 2. Engage in, promote, incite, or facilitate\
\ the harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n \
\ 4. Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices \n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any\
\ action or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama 2 Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system \n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Llama 2 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Llama 2 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Llama 2 or outputs are human-generated\n\
\ 6. Generating or facilitating false online engagement, including fake reviews\
\ and other means of fake online engagement \n 4. Fail to appropriately disclose\
\ to end users any known dangers of your AI system \nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means: \n * Reporting issues with\
\ the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)\n\
\ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)\n\
\ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)\
\ \n * Reporting violations of the Acceptable Use Policy or unlicensed uses of\
\ Llama: [[email protected]](mailto:[email protected])"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- llama-cpp
- gguf-my-repo
license: llama2
base_model: meta-llama/Llama-2-7b-hf
---
# FabioTrindade/Llama-2-7b-hf-Q8_0-GGUF
This model was converted to GGUF format from [`meta-llama/Llama-2-7b-hf`](https://huggingface.co/meta-llama/Llama-2-7b-hf) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-2-7b-hf) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo FabioTrindade/Llama-2-7b-hf-Q8_0-GGUF --hf-file llama-2-7b-hf-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo FabioTrindade/Llama-2-7b-hf-Q8_0-GGUF --hf-file llama-2-7b-hf-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo FabioTrindade/Llama-2-7b-hf-Q8_0-GGUF --hf-file llama-2-7b-hf-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo FabioTrindade/Llama-2-7b-hf-Q8_0-GGUF --hf-file llama-2-7b-hf-q8_0.gguf -c 2048
```
|