CS-552 Phase 2 MCQA Fine-tuning Model

This model was developed as part of the CS-552 course project on quantized language models.

Model Details

  • Base Model: Qwen/Qwen3-0.6B-Base
  • Training Phase: Phase 2 MCQA Fine-tuning
  • Dataset: simplescaling/s1K-1.1_tokenized
  • Training Method: Post-training quantization (see the sketch below)
  • Model Size: 596M parameters (F32, safetensors)
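
The exact quantization recipe is not documented here. As a rough illustration only, 8-bit post-training quantization of the base model can be applied at load time with bitsandbytes via transformers; the configuration below is an assumption, not the course's actual setup.

from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Hypothetical 8-bit PTQ config; the course's actual recipe may differ
quant_config = BitsAndBytesConfig(load_in_8bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-0.6B-Base",
    quantization_config=quant_config,
    device_map="auto",
)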

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("charlottemeyer/qwen3-0.6b-quantized-phase2-mcqa")
tokenizer = AutoTokenizer.from_pretrained("charlottemeyer/qwen3-0.6b-quantized-phase2-mcqa")

# Generate a response
inputs = tokenizer("Your question here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
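
Since the model is tuned for multiple-choice QA, a typical prompt concatenates the question with lettered options. The template below is illustrative; the exact format used during training is not documented in this card.

question = "What is the capital of France?"
options = ["Paris", "London", "Berlin", "Madrid"]

# Hypothetical MCQA prompt template, ending with "Answer:" to elicit a letter
prompt = question + "\n" + "\n".join(
    f"{letter}. {option}" for letter, option in zip("ABCD", options)
) + "\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5)
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)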

Training Details

  • Trained for the CS-552 course evaluation
  • Optimized for reasoning and multiple-choice question answering (MCQA) tasks
  • Uses an answer-first output format for logit extraction (see the sketch below)
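
The answer-first format means the option letter is the first generated token, so the predicted choice can be read directly from the next-token logits instead of parsing free-form text. A minimal sketch, reusing the model, tokenizer, and MCQA prompt from the Usage section, and assuming each option letter encodes to a single token:

import torch

# Token ids for " A", " B", " C", " D" (assumes each encodes to one token)
letter_ids = [tokenizer.encode(f" {letter}", add_special_tokens=False)[0]
              for letter in "ABCD"]

with torch.no_grad():
    # Logits for the token that would follow "Answer:"
    next_token_logits = model(**inputs).logits[0, -1]

predicted = "ABCD"[int(torch.argmax(next_token_logits[letter_ids]))]
print(f"Predicted answer: {predicted}")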
