🧠 CooperLM-354M (4-bit Quantized)

This is a 4-bit quantized version of CooperLM-354M, a 354M-parameter GPT-2-style language model trained from scratch on a subset of Wikipedia, BookCorpus, and OpenWebText.

The quantized model is intended for faster inference and a smaller memory footprint, which makes it especially useful on CPU or limited-GPU setups.


📌 Model Details

  • Base Model: mehta/CooperLM-354M
  • Architecture: GPT-2 (24 layers, 16 heads, 1024 hidden size)
  • Quantization: 4-bit integer weights via AutoGPTQ (safetensors)
  • Precision: INT4

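The exact quantization recipe isn't published here; the sketch below shows one way a 4-bit GPTQ export of the base model could be produced with transformers' GPTQConfig (requires optimum and auto-gptq; the calibration dataset and output path are illustrative assumptions, not necessarily the settings actually used):

from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

# Illustrative 4-bit GPTQ quantization of the full-precision base model
tokenizer = AutoTokenizer.from_pretrained("mehta/CooperLM-354M")
gptq_config = GPTQConfig(bits=4, dataset="wikitext2", tokenizer=tokenizer)  # calibration set is an assumption

quantized = AutoModelForCausalLM.from_pretrained(
    "mehta/CooperLM-354M",
    quantization_config=gptq_config,
    device_map="auto",
)

# Save the INT4 weights as safetensors for upload
quantized.save_pretrained("CooperLM-354M-4bit")
tokenizer.save_pretrained("CooperLM-354M-4bit")
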
🛠️ How to Use

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the quantized checkpoint and its tokenizer
tokenizer = AutoTokenizer.from_pretrained("mehta/CooperLM-354M-4bit")
model = AutoModelForCausalLM.from_pretrained("mehta/CooperLM-354M-4bit")

# Use a GPU if available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

prompt = "In the distant future,"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Sample a continuation of up to 100 tokens (prompt included) with nucleus sampling
outputs = model.generate(
    **inputs,
    max_length=100,
    temperature=0.8,
    top_p=0.95,
    do_sample=True
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
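
To confirm the smaller memory footprint mentioned above, you can inspect the loaded model directly (a quick illustrative check, not a benchmark):

# Reports the in-memory size of the loaded weights and buffers, in MB
print(f"Memory footprint: {model.get_memory_footprint() / 1024**2:.0f} MB")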