Vfrz, nielsr (HF Staff) committed
Commit 41cb964 · verified · 1 Parent(s): 398234b

Improve model card: Add library_name, GitHub link, abstract, and usage example (#1)


- Improve model card: Add library_name, GitHub link, abstract, and usage example (de00b8dcf3d3e71a5c562319ae31639b6e55390d)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1):
  1. README.md +42 -4
README.md CHANGED
@@ -1,17 +1,23 @@
  ---
- license: apache-2.0
+ base_model:
+ - Qwen/Qwen2.5-7B
  datasets:
  - MegaScience/MegaScience
  language:
  - en
+ license: apache-2.0
  metrics:
  - accuracy
- base_model:
- - Qwen/Qwen2.5-7B
  pipeline_tag: text-generation
+ library_name: transformers
  ---
+
  # [MegaScience: Pushing the Frontiers of Post-Training Datasets for Science Reasoning](https://arxiv.org/abs/2507.16812)

+ Scientific reasoning is critical for developing AI scientists and supporting human researchers in advancing the frontiers of natural science discovery. This work introduces **TextbookReasoning**, an open dataset featuring truthful reference answers extracted from 12k university-level scientific textbooks, comprising 650k reasoning questions. It further presents **MegaScience**, a large-scale mixture of high-quality open-source datasets totaling 1.25 million instances, developed through systematic ablation studies. Models trained on MegaScience demonstrate superior performance and training efficiency, significantly outperforming corresponding official instruct models, especially for larger and stronger base models.
+
+ Find the code and more details on the [MegaScience GitHub repository](https://github.com/GAIR-NLP/lm-open-science-evaluation).
+
  ## Qwen2.5-7B-MegaScience

  ### Training Recipe
@@ -33,6 +39,38 @@ pipeline_tag: text-generation
  <img src="https://cdn-uploads.huggingface.co/production/uploads/616bfc2b40e2f69baa1c7add/xFTJ7nevc3S4UYJxUS7ue.png" alt="Data Pipeline" style="width:80%;">
  </div>

+ ## Quickstart
+
+ You can use this model directly with the `transformers` library:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+
+ model_id = "MegaScience/Qwen2.5-7B-MegaScience"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,  # or torch.float16 if bfloat16 is not supported
+     device_map="auto"
+ )
+
+ messages = [
+     {"role": "user", "content": "What is the capital of France?"},
+ ]
+
+ text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ model_inputs = tokenizer(text, return_tensors="pt").to(model.device)
+
+ generated_ids = model.generate(
+     model_inputs.input_ids,
+     max_new_tokens=256
+ )
+ generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ print(generated_text)
+ ```
+
  ### More about MegaScience

  <div style="display: flex; justify-content: left; gap: 20px;">
@@ -51,4 +89,4 @@ Check out our [paper](https://arxiv.org/abs/2507.16812) for more details. If you
  journal={arXiv preprint arXiv:2507.16812},
  url={https://arxiv.org/abs/2507.16812}
  }
- ```
+ ```
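
One caveat with the quickstart added in this commit: `generate` returns the prompt tokens together with the completion, so `generated_text` above will echo the question before the answer. A minimal sketch of a variant that decodes only the newly generated tokens, assuming the same `model`, `tokenizer`, and `model_inputs` as in the snippet:

```python
# Sketch: decode only the tokens produced after the prompt, so the
# printed text contains just the model's reply. Assumes `model`,
# `tokenizer`, and `model_inputs` from the quickstart above.
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=256,
)
prompt_len = model_inputs.input_ids.shape[1]
new_tokens = generated_ids[0, prompt_len:]  # strip the echoed prompt
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```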