# Qwen3-14B-MegaScience
This repository contains the Qwen3-14B-MegaScience model, a large language model fine-tuned on the MegaScience dataset for enhanced scientific reasoning.
Project Link: https://huggingface.co/MegaScience (Hugging Face Organization for MegaScience project)
Code Repository: https://github.com/GAIR-NLP/lm-open-science-evaluation
You can use this model with the `transformers` library for text generation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "MegaScience/Qwen3-14B-MegaScience"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or torch.float16 if bfloat16 is not supported
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful and knowledgeable assistant."},
    {"role": "user", "content": "Explain the concept of quantum entanglement in simple terms."},
]

# Build the prompt string using the model's chat template.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer(text, return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,  # passes input_ids together with the attention_mask
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    eos_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    generated_ids[0][model_inputs.input_ids.shape[1]:],
    skip_special_tokens=True,
)
print(response)
```
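Alternatively, for quick experiments you can use the high-level `pipeline` API, which applies the chat template and decoding for you. This is a minimal sketch, assuming a recent `transformers` release in which the text-generation pipeline accepts chat-style message lists; the prompt shown is illustrative, not from the model card:

```python
from transformers import pipeline
import torch

# The pipeline wraps tokenization, chat templating, generation, and decoding.
generator = pipeline(
    "text-generation",
    model="MegaScience/Qwen3-14B-MegaScience",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Why does ice float on water?"},  # example prompt (assumption)
]
result = generator(messages, max_new_tokens=256, do_sample=True, temperature=0.7)

# With chat input, generated_text is the full conversation; the last
# entry is the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```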
If you use our dataset or find our work useful, please cite:
```bibtex
@article{fan2025megascience,
  title   = {MegaScience: Pushing the Frontiers of Post-Training Datasets for Science Reasoning},
  author  = {Fan, Run-Ze and Wang, Zengzhi and Liu, Pengfei},
  journal = {arXiv preprint arXiv:2507.16812},
  year    = {2025},
  url     = {https://arxiv.org/abs/2507.16812}
}
```