narukijima committed on
Commit 223c6f0 · verified · 1 Parent(s): 1768ed1

replace card (plain text + minimal YAML header)

Files changed (1)
  1. README.md +25 -0
README.md CHANGED
@@ -1,3 +1,11 @@
+ ---
+ library_name: transformers
+ base_model: openai/gpt-oss-20b
+ language: [en, ja]
+ pipeline_tag: text-generation
+ tags: []
+ ---
+
  # pioneer-mini-v1

  **Technical notes**
@@ -9,3 +17,20 @@
  - Data used: neutral_examples=86376, pairs_used=14398
  - Source files: `narukijima/pioneer` → `P_instruction_pairs_en.jsonl`, `P_instruction_pairs_ja.jsonl`
  - Inference: use base tokenizer & chat template
+
+ **Quick inference**
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+ M = "narukijima/pioneer-mini-v1"
+ tok = AutoTokenizer.from_pretrained(M, trust_remote_code=True)
+ mdl = AutoModelForCausalLM.from_pretrained(
+     M, torch_dtype=torch.bfloat16, device_map='auto', trust_remote_code=True
+ )
+ msgs = [{"role":"user","content":"test"}]
+ p = tok.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
+ out = mdl.generate(**tok(p, return_tensors='pt').to(mdl.device),
+                    max_new_tokens=64, do_sample=True, temperature=0.7)
+ print(tok.decode(out[0], skip_special_tokens=True))
+ ```
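
The technical notes in the new card point at `P_instruction_pairs_en.jsonl` and `P_instruction_pairs_ja.jsonl` in `narukijima/pioneer` as the source pair files. A minimal sketch of pulling and inspecting one of them, assuming `narukijima/pioneer` is an accessible dataset repo with the files at its top level (the JSONL field names are not documented in this card):

```python
# Sketch only: assumes narukijima/pioneer is a public dataset repo and the
# JSONL files sit at its top level; field names are not stated in the card.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="narukijima/pioneer",
    filename="P_instruction_pairs_en.jsonl",
    repo_type="dataset",
)
with open(path, encoding="utf-8") as f:
    pairs = [json.loads(line) for line in f if line.strip()]

print(len(pairs))        # compare against the pairs_used figure in the notes
print(sorted(pairs[0]))  # inspect which fields each pair actually carries
```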