Can't run on a single H100

#27
by jvieirasobrinho - opened

I've been trying to run Llama-4-Scout-17B-16E on a single H100, but I keep getting a "CUDA out of memory" error. I'm not sure whether I'm getting the quantization part right. I've been keeping an eye on nvidia-smi while the model loads, and memory usage seems under control. Could someone please advise?

from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

model_name = "meta-llama/Llama-4-Scout-17B-16E"

# Quantize to 4-bit with bitsandbytes to try to fit the model on one GPU
bnb_config = BitsAndBytesConfig(load_in_4bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",  # let accelerate decide where the weights go
)

prompt = "Explain the theory of relativity in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("\nResponse:\n", response)
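
For reference, a more aggressive 4-bit setup would look something like the sketch below. These are all standard BitsAndBytesConfig options (it reuses the imports above), but whether they're enough to fit Scout on one 80 GB card is an open question:

# NF4 with double quantization and bf16 compute -- a common memory-frugal combo
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 is usually more accurate than plain FP4
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16 instead of fp32
)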

Thanks in advance!

Even on 2 H100s it is not working. They lied to us.

@kingabzpro that does seem to be the case... I've also tried with 2 H100s, but still no luck. 😕

@jvieirasobrinho I even tried on an H200. No luck. I guess I will try 2 H200s next. Man, I am losing money on Runpod.

Same test on my side: the Instruct version with int4 quantization doesn't work on 1 or 2 H100s either.

Same here, out of memory on a single H100 with vLLM (torch.OutOfMemoryError: CUDA out of memory).

Hey there,

I managed to get it to run using 4 x H100. It's quite hard on memory, it seems. But NVIDIA is releasing a NIM next week as well, I heard from my TAM. Here's what I ran:
sudo docker run --runtime=nvidia --gpus all --shm-size=64g \
  --name llamsa4scoutinstruct \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env HUGGING_FACE_HUB_TOKEN=$HUGGING_FACE_HUB_TOKEN \
  -e PYTORCH_NO_CUDA_MEMORY_CACHING=1 \
  -e PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:128,expandable_segments:True" \
  -e VLLM_USE_TENSOR_PARALLEL=true \
  -e VLLM_NUM_GPUS=4 \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  --ipc=host \
  -p 8000:8000 \
  vllm/vllm-openai:latest \
  --model meta-llama/Llama-4-Scout-17B-16E-Instruct \
  --tensor-parallel-size 4 \
  --max-model-len 4096 \
  --max-num-seqs 2 \
  --enforce-eager \
  --disable-log-stats
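
Once the container is up, a quick sanity check against the OpenAI-compatible endpoint on the mapped port 8000 could look like the sketch below (the prompt and sampling values are just placeholders):

import requests

# Query the vLLM OpenAI-compatible server started above; localhost:8000
# matches the -p 8000:8000 mapping, adjust if yours differs.
resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "meta-llama/Llama-4-Scout-17B-16E-Instruct",
        "messages": [
            {"role": "user", "content": "Explain the theory of relativity in simple terms."}
        ],
        "max_tokens": 200,
        "temperature": 0.7,
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])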

nvidia-smi

Every 2.0s: nvidia-smi                                 zajgf-dsg-llm-4: Thu Jun 12 08:35:25 2025

Thu Jun 12 08:35:25 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 570.148.08             Driver Version: 570.148.08     CUDA Version: 12.8     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA H100XM-80C             On   |   00000000:03:00.0 Off |                  N/A |
| N/A  N/A   P0             N/A /    N/A  |  56693MiB /  81920MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA H100XM-80C             On   |   00000000:03:01.0 Off |                  N/A |
| N/A  N/A   P0             N/A /    N/A  |  56421MiB /  81920MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  NVIDIA H100XM-80C             On   |   00000000:03:02.0 Off |                  N/A |
| N/A  N/A   P0             N/A /    N/A  |  56421MiB /  81920MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   3  NVIDIA H100XM-80C             On   |   00000000:03:03.0 Off |                  N/A |
| N/A  N/A   P0             N/A /    N/A  |  56501MiB /  81920MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A           16544      C   /usr/bin/python3                      56690MiB |
|    1   N/A  N/A           16545      C   /usr/bin/python3                      56418MiB |
|    2   N/A  N/A           16546      C   /usr/bin/python3                      56418MiB |
|    3   N/A  N/A           16547      C   /usr/bin/python3                      56498MiB |
+-----------------------------------------------------------------------------------------+
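
Those per-GPU numbers line up with a rough back-of-the-envelope estimate. The sketch below assumes the widely reported ~109B total parameter count for Scout (the 17B in the name is only the active parameters per token):

# Rough memory math, assuming ~109B total parameters for Scout.
total_params = 109e9

bf16_gib = total_params * 2 / 2**30    # ~203 GiB of weights alone
int4_gib = total_params * 0.5 / 2**30  # ~51 GiB of weights at 4-bit

print(f"bf16 weights: {bf16_gib:.0f} GiB")  # cannot fit on 1 or 2 x 80 GB H100s
print(f"int4 weights: {int4_gib:.0f} GiB")  # fits one H100 only with little
                                            # headroom for KV cache/activations

Four GPUs at ~56 GiB each is roughly 224 GiB total, which is about what bf16 weights plus KV cache would need, so single- and dual-H100 attempts only have a chance if the 4-bit load actually takes effect.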
