Qwen3-1.7B-pfn-qfin

Model Description

Qwen3-1.7B-pfn-qfin is a fine-tuned model based on Qwen/Qwen3-1.7B-Base. It is a base (continuation) model, which is good at generating continuous text. Qwen3-1.7B-pfn-qfin was fine-tuned on about 400M tokens from multiple special-purpose datasets generated by Preferred Networks, which are cleared for commercial use. Fine-tuning was carried out with a 2048-token context length. This model is released under the PLaMo Community License.

Benchmarking

The benchmark scores were obtained using the Japanese Language Model Financial Evaluation Harness. For the benchmark, 0-shot evaluation and the default prompts are used (a rough invocation sketch follows the table below).

Task              Metric  Qwen3-1.7B  Ours
----------------  ------  ----------  ------
chabsa            f1      0.5734      0.7116
cma_basics        acc     0.3158      0.5263
cpa_audit         acc     0.1583      0.1884
fp2               acc     0.4737      0.4912
security_sales_1  acc     0.2421      0.3389
----------------  ------  ----------  ------
Overall                   0.3527      0.4513
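As a rough sketch only: the command below assumes the harness exposes an lm-evaluation-harness-style CLI; the entrypoint, flag names, and task identifiers are assumptions rather than values taken from this card, so check the harness documentation for the exact invocation.

python main.py \
    --model hf-causal \
    --model_args pretrained=pfnet/Qwen3-1.7B-pfn-qfin,trust_remote_code=True \
    --tasks chabsa,cma_basics,cpa_audit,fp2,security_sales_1 \
    --num_fewshot 0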

Usage

Install the required libraries as follows:

python -m pip install "transformers>=4.51.0"
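To confirm that the installed transformers version satisfies the >=4.51.0 requirement, a minimal check (any equivalent method works):

import transformers
print(transformers.__version__)  # expect 4.51.0 or later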

Then execute the following Python code:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model; device_map="auto" places the weights on the available device(s)
tokenizer = AutoTokenizer.from_pretrained("pfnet/Qwen3-1.7B-pfn-qfin", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("pfnet/Qwen3-1.7B-pfn-qfin", device_map="auto", trust_remote_code=True)

# Prompt ("The Bank of Japan ..."); the base model continues the text
text = "日本銀行は"
input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

# Sample a 32-token continuation
with torch.no_grad():
    generated_tokens = model.generate(
        inputs=input_ids,
        max_new_tokens=32,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        temperature=1.0,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )[0]

generated_text = tokenizer.decode(generated_tokens)
print(generated_text)
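If deterministic output is preferred (for example, when comparing runs), a minimal greedy-decoding variant of the same call can be used, reusing the tokenizer, model, and input_ids loaded above:

# Greedy decoding: no sampling, so repeated runs give the same continuation
with torch.no_grad():
    greedy_tokens = model.generate(
        inputs=input_ids,
        max_new_tokens=32,
        do_sample=False,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id
    )[0]
print(tokenizer.decode(greedy_tokens))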

Bias, Risks, and Limitations

Qwen3-1.7B-pfn-qfin is a new technology that carries risks with use. Testing conducted to date has been in English and Japanese, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Qwen3-1.7B-pfn-qfin's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. This model is not designed for legal, tax, investment, financial, or other advice. Therefore, before deploying any applications of Qwen3-1.7B-pfn-qfin, developers should perform safety testing and tuning tailored to their specific applications of the model.

Authors

Preferred Networks, Inc.

  • Masanori Hirano
  • Kentaro Imajo
  • Takeshi Masuko

License

PLaMo Community License
