
Csaba Kecskemeti PRO

csabakecskemeti

AI & ML interests

None yet

Recent Activity

Organizations

Zillow · DevQuasar · Hugging Face Party @ PyTorch Conference · Intelligent Estate · open/acc

csabakecskemeti's activity

posted an update 9 days ago
posted an update 10 days ago
posted an update 25 days ago
I'm collecting llama-bench results for inference with llama 3.1 8B q4 and q8 reference models on various GPUs. The results are averages of 5 executions.
The test systems vary (different motherboards and CPUs), but that probably has little effect on inference performance.

https://devquasar.com/gpu-gguf-inference-comparison/
The exact models used are listed on the page.

I'd welcome results from other GPUs if you have access to anything else; everything you need is in the post. Hopefully this is useful information for everyone.
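
For anyone who wants to contribute numbers, the runs can be scripted roughly like this (a minimal sketch; the GGUF filename is a placeholder, and llama-bench's -r flag does the 5-run averaging):

import subprocess

# Placeholder filename - substitute the exact reference model from the page.
model = "llama-3.1-8b.Q8_0.gguf"

# pp512 = 512-token prompt processing, tg128 = 128-token generation;
# -r 5 repeats each test 5 times and reports mean +/- stddev.
subprocess.run(
    ["llama-bench", "-m", model, "-p", "512", "-n", "128", "-r", "5"],
    check=True,
)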
posted an update 27 days ago
Managed to get my hands on a 5090FE, it's beefy

| model         | size     | params | backend | ngl | test  | t/s               |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | CUDA    | 99  | pp512 | 12207.44 ± 481.67 |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | CUDA    | 99  | tg128 | 143.18 ± 0.18     |

Comparison with other GPUs:
http://devquasar.com/gpu-gguf-inference-comparison/
replied to their post 30 days ago

Follow-up

With the smaller context-length dataset, the training succeeded.
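
The win here is just shorter sequences; instead of switching datasets, one could also cap the length at tokenization time (illustrative sketch, assuming a standard Hugging Face tokenizer in the preprocessing step):

from transformers import AutoTokenizer

# Illustrative: truncating to a fixed max_length bounds the per-sample
# activation memory, which is what made the smaller dataset fit.
tokenizer = AutoTokenizer.from_pretrained("Qwen/QwQ-32B")  # example id
tokens = tokenizer("some training sample text", truncation=True, max_length=1024)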

posted an update about 1 month ago
reacted to clem's post with 🚀 about 1 month ago
We just crossed 1,500,000 public models on Hugging Face (and 500k spaces, 330k datasets, 50k papers). One new repository is created every 15 seconds. Congratulations all!
posted an update about 1 month ago
replied to their post about 1 month ago

No success so far; the training data contains some larger contexts and it fails just before completing the first epoch.
(dataset: DevQuasar/brainstorm-v3.1_vicnua_1k)

Does anyone have further suggestions for the bnb config (with ROCm on an MI100)?
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bf16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
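
A minimal sketch of how the config is consumed, with the model id assumed from the QwQ-32B run this thread is about:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/QwQ-32B",                  # assumed example id
    quantization_config=bnb_config,  # the config above
    device_map="auto",
)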

Now testing with my other, smaller dataset, which should need less memory:
DevQuasar/brainstorm_vicuna_1k

replied to their post about 1 month ago

It had failed by the morning; I need to find more room to cut memory usage.

replied to their post about 1 month ago

The machine itself is also funny. This is my GPU test bench.
Now also testing the PWM fan control and JetKVM.

IMG_7216.jpg

posted an update about 1 month ago
Fine-tuning on the edge. Pushing the MI100 to its limits.
QwQ-32B 4-bit QLoRA fine-tuning.
VRAM usage: 31.498G/31.984G :D
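
Roughly what the QLoRA side of a run like this looks like with peft (a minimal sketch; the LoRA hyperparameters are assumptions, and `model` is a base model already loaded in 4-bit with the BitsAndBytesConfig from the replies):

from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# `model` is assumed to be the 4-bit-quantized base model.
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=16,                                 # assumed rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed target layers
    task_type="CAUSAL_LM",
))
model.print_trainable_parameters()  # only the small LoRA adapters train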

replied to their post about 2 months ago
replied to their post about 2 months ago

Updated the post with GGUF (Q4,Q8) performance metrics

replied to their post about 2 months ago

Good callout, will add this evening.
Llama 3 8B Q8 was around 80 t/s generation.

posted an update about 2 months ago
-UPDATED-
4-bit inference is working! The blogpost is updated with a code snippet and requirements.txt:
https://devquasar.com/uncategorized/all-about-amd-and-rocm/
-UPDATED-
I've played around with an MI100 and ROCm and collected my experience in a blogpost:
https://devquasar.com/uncategorized/all-about-amd-and-rocm/
Unfortunately I could not make inference or training work with the model loaded in 8-bit or with BnB, but I did everything else and documented my findings.
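
For reference, 4-bit bitsandbytes inference looks roughly like this (a minimal sketch; the model id is an example, not necessarily the one from the blogpost):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.2-3B"  # example id, substitute your own

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)
inputs = tok("Hello", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0]))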
replied to their post about 2 months ago
replied to their post about 2 months ago

So far I've managed to get a working bnb install up:

(bnbtest) kecso@gpu-testbench2:~/bitsandbytes/examples$ python -m bitsandbytes
g++ (Ubuntu 14.2.0-4ubuntu2) 14.2.0
Copyright (C) 2024 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++ BUG REPORT INFORMATION ++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++ OTHER +++++++++++++++++++++++++++
ROCm specs: rocm_version_string='63', rocm_version_tuple=(6, 3)
PyTorch settings found: ROCM_VERSION=63
The directory listed in your path is found to be non-existent: local/gpu-testbench2
The directory listed in your path is found to be non-existent: @/tmp/.ICE-unix/2803,unix/gpu-testbench2
The directory listed in your path is found to be non-existent: /etc/xdg/xdg-ubuntu
The directory listed in your path is found to be non-existent: /org/gnome/Terminal/screen/6bd83ab2_fd9f_4990_876a_527ef8117ef6
The directory listed in your path is found to be non-existent: //debuginfod.ubuntu.com
WARNING! ROCm runtime files not found in any environmental path.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Checking that the library is importable and ROCm is callable...
SUCCESS!
Installation was successful!

It's able to load the model into VRAM, but inference fails:
Exception: cublasLt ran into an error!

This is the main problem with anything not NVIDIA. The software is painful!
Keep trying...

reacted to stefan-it's post with 👍 about 2 months ago
She arrived 😍

[Expect more models soon...]
replied to their post about 2 months ago

All agreed.
I think I have a decent amount of NVIDIA GPUs at home (a server with dual V100 + P40, and a workstation with a 4080 + 2x 3090), so I was primarily just curious how much suffering it is to use AMD.

Obviously the biggest issue is software. You have to "hack" together the ROCm versions of things. Every simple step in the NVIDIA ecosystem turns into a mini investigation on AMD :)

That's the main reason I'll put together a blogpost with all the instructions, to make it easier for others.

So far I've made the testbench post (some BIOS settings were needed for the card to work - only on the MI100, not on the MI50), set up ROCm drivers and PyTorch for ROCm, managed to run inference with llama.cpp, and ran a 20-epoch LoRA on the f32 Llama 3.2 3B, producing a model.
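
As a quick sanity check that the ROCm PyTorch build actually sees the card, this is the kind of thing worth running first (standard PyTorch calls; on ROCm builds the torch.cuda API is backed by HIP):

import torch

print(torch.cuda.is_available())      # True once the driver stack works
print(torch.version.hip)              # HIP/ROCm version string on ROCm builds
print(torch.cuda.get_device_name(0))  # should report the MI100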

More details later this week in a blogpost.