New version of Qwen/Qwen3-4B, fine-tuned for 2 epochs on the soob3123/amoral_reasoning dataset.

Below is the eval/train loss graph.

Uploaded model

  • Developed by: fakezeta
  • License: apache-2.0
  • Finetuned from model: Qwen/Qwen3-4B
  • Dataset: soob3123/amoral_reasoning

This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
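As a usage sketch (not from the model card itself), the fine-tuned checkpoint can be loaded with the Hugging Face transformers library; the prompt text and generation settings below are illustrative assumptions:

```python
# Hypothetical usage sketch for fakezeta/amoral-Qwen3-4B with transformers.
# Assumes transformers and torch are installed and the model can be downloaded.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "fakezeta/amoral-Qwen3-4B"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model in BF16 (its native tensor type) and generate a reply."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="bfloat16", device_map="auto"
    )
    # Qwen3 models ship a chat template; wrap the prompt as a user turn.
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize what this model was fine-tuned for."))
```

The `if __name__ == "__main__":` guard keeps the (large) model download out of any import of this snippet; only calling `generate` triggers it.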

Model size: 4.02B params · Tensor type: BF16 (Safetensors)

Model tree for fakezeta/amoral-Qwen3-4B

  • Base model: Qwen/Qwen3-4B-Base → finetuned as Qwen/Qwen3-4B → finetuned as this model
  • Quantizations: 3 models

Dataset used to train fakezeta/amoral-Qwen3-4B: soob3123/amoral_reasoning