Qwen3-8B-Translator-LoRA

This model is a fine-tuned version of Qwen/Qwen3-8B, adapted with LoRA for English-to-Chinese translation and tailored to audio product terminology.

Fine-tuning Details

  • Fine-tuning Method: LoRA (Low-Rank Adaptation)
  • Dataset: Custom parallel corpus for audio products (English-Chinese); see the formatting sketch after this list
  • Framework: PyTorch, Hugging Face Transformers, TRL, PEFT, Optimum TPU
  • Hardware: Google Cloud TPU v3-8
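
The corpus itself is not published. As an illustration only, here is one plausible way to render a single English-Chinese pair as a chat-style example for TRL's SFTTrainer; the field names ("en", "zh"), the system prompt, and the sample sentence are all assumptions, not the actual corpus schema:

```python
# Hypothetical record layout; the real corpus schema is not published.
def to_messages(pair: dict) -> dict:
    """Render one English-Chinese pair as a chat-style SFT example."""
    return {
        "messages": [
            {"role": "system",
             "content": "Translate the following English text about audio products into Chinese."},
            {"role": "user", "content": pair["en"]},
            {"role": "assistant", "content": pair["zh"]},
        ]
    }

example = to_messages({
    "en": "The headphones use a 40 mm dynamic driver with a 20 Hz-20 kHz frequency response.",
    "zh": "该耳机采用 40 毫米动圈单元，频率响应为 20 Hz-20 kHz。",
})
```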

Training Procedure

The model was trained using the SFTTrainer from the TRL library; hedged configuration sketches follow the two hyperparameter lists below.

Training Hyperparameters

  • max_seq_length: 768
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • num_train_epochs: 10
  • eval_strategy: "steps"
  • eval_steps: 10
  • learning_rate: 2e-5
  • lr_scheduler_type: "cosine"
  • warmup_ratio: 0.1
  • weight_decay: 0.01
  • optim: "adamw_torch_xla"
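
In TRL, these hyperparameters map onto an SFTConfig. A minimal sketch, assuming a recent TRL release with these argument names (some, e.g. max_seq_length, have been renamed across versions); output_dir is a placeholder:

```python
from trl import SFTConfig

training_args = SFTConfig(
    output_dir="qwen3-8b-translator-lora",  # placeholder, not from the card
    max_seq_length=768,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=10,
    eval_strategy="steps",
    eval_steps=10,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    weight_decay=0.01,
    optim="adamw_torch_xla",  # XLA-aware AdamW used for TPU v3-8 training
)
```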

LoRA Configuration

  • r: 128
  • lora_alpha: 256
  • lora_dropout: 0.05
  • bias: "none"
  • target_modules: ["q_proj", "v_proj", "gate_proj", "down_proj"]
  • modules_to_save: ["lm_head", "embed_tokens"]
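
Putting the pieces together, a hedged sketch of the training setup; train_dataset and eval_dataset stand in for the unpublished corpus, and task_type is an assumption:

```python
from peft import LoraConfig
from trl import SFTTrainer

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "v_proj", "gate_proj", "down_proj"],
    # lm_head and embed_tokens are trained in full alongside the adapters
    modules_to_save=["lm_head", "embed_tokens"],
    task_type="CAUSAL_LM",  # assumption; not stated on the card
)

trainer = SFTTrainer(
    model="Qwen/Qwen3-8B",
    args=training_args,           # the SFTConfig sketched above
    train_dataset=train_dataset,  # formatted parallel corpus (not published)
    eval_dataset=eval_dataset,
    peft_config=lora_config,
)
trainer.train()
```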

Training Results

Step    Training Loss    Validation Loss
  10    1.093800         0.855263
  20    0.777300         0.644428
  30    0.593800         0.520456
  40    0.445300         0.459498
  50    0.404300         0.417660
  60    0.289100         0.402447
  70    0.308600         0.388980
  80    0.259800         0.369449
  90    0.215800         0.368935
 100    0.229500         0.359940
 110    0.150400         0.388569
 120    0.126000         0.395148
 130    0.124000         0.387644

Intended Use

This model is intended for translating English text related to audio products into Chinese. It can be used by professionals in the audio industry, technical writers, or anyone needing to translate such content.
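
A minimal inference sketch, assuming the adapter is applied on top of the base model with PEFT and that plain instruction-style prompting works; the prompt wording and generation settings are illustrative, not the training format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "nananatsu/Qwen3-8B-Translator-LoRA")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")

messages = [{"role": "user",
             "content": "Translate into Chinese: The subwoofer delivers deep bass down to 28 Hz."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    enable_thinking=False,  # Qwen3 chat-template flag: skip the thinking trace
    return_tensors="pt",
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```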

Limitations and Bias

  • The model's performance is best on text similar to the data it was trained on (audio product domain).
  • It may not generalize well to other domains or highly colloquial language.
  • As with any language model, biases present in the training data may be reflected in its output.