second Llama-4-Scout-17B-16E-Instruct-abliterated-v2

#1028, opened by jacek2024

It's queued! :D
Nice to see an improved version of it. Thanks for the recommendation!

You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#Llama-4-Scout-17B-16E-Instruct-abliterated-v2-GGUF for quants to appear.

Does this mean it died this time? :)

    -2000  218 si Llama-4-Scout-17B-16E-Instruct-abliterated-v2 error/1 mmproj extraction hf Q8_0

Yes, but it is fixable. We just need to provide preprocessor_config.json manually (see the sketch after the traceback below).

INFO:hf-to-gguf:Loading model: Llama-4-Scout-17B-16E-Instruct-abliterated-v2
INFO:hf-to-gguf:Model architecture: Llama4ForConditionalGeneration
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
Traceback (most recent call last):
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 6533, in <module>
    main()
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 6512, in main
    model_instance = model_class(dir_model, output_type, fname_out,
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 1176, in __init__
    with open(self.dir_model / "preprocessor_config.json", "r", encoding="utf-8") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'Llama-4-Scout-17B-16E-Instruct-abliterated-v2/preprocessor_config.json'
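
One way to provide the file manually would be to pull it from the base model's repository on the Hub; a minimal sketch, assuming huggingface_hub is installed and that the base repo is meta-llama/Llama-4-Scout-17B-16E-Instruct (an assumption on my part, and that repo may be gated):

```python
# Sketch: download the missing preprocessor_config.json from the assumed
# base model repo into the directory the converter reads from.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed base repo
    filename="preprocessor_config.json",
    local_dir="Llama-4-Scout-17B-16E-Instruct-abliterated-v2",
)
print(path)
```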

I fixed it:

    cp -a /bpool/Llama-4-Scout-17B-16E-Instruct/preprocessor_config.json /mradermacher/tmp/quant/Llama-4-Scout-17B-16E-Instruct-abliterated-v2/.

preprocessor_config.json is not something that ever gets modified in derivative models, and in the highly unlikely case that someone actually does modify this file, I'm sure it would be provided. So it is safe to just copy it from the base model whenever the file is missing.
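
A minimal sketch of that fallback in Python, reusing the paths from the cp command above (illustrative, not the pipeline's actual code):

```python
# Sketch: if a derivative model directory lacks preprocessor_config.json,
# copy it over from the base model before running convert_hf_to_gguf.py.
import shutil
from pathlib import Path

base = Path("/bpool/Llama-4-Scout-17B-16E-Instruct")
derived = Path("/mradermacher/tmp/quant/Llama-4-Scout-17B-16E-Instruct-abliterated-v2")

cfg = derived / "preprocessor_config.json"
if not cfg.exists():
    # Safe per the reasoning above: derivative models don't modify this file.
    shutil.copy2(base / "preprocessor_config.json", cfg)
```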

Unfortunately, we are currently computing the imatrix of r1-1776, so Llama-4-Scout-17B-16E-Instruct-abliterated-v2 will have to wait until that is done, which might take around 12 hours, as we don't currently have the RAM required for the mmproj extraction.

2  713 r1-1776 run/imatrix (GPU-2d) / 294.03s/c 150.2/1543.7m(1101.9-2629.1) [18/315] 5.9459
-2000  218 si Llama-4-Scout-17B-16E-Instruct-abliterated-v2 blocked/admin/vision

Oh, that was my mistake: I didn't save the processor when I made the new version. It's fixed now. It's the same as the original model's processor, so I hope this doesn't cause any more trouble.
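
For reference, a minimal sketch of the step that was missed, assuming the standard transformers AutoProcessor API (the repo id and output path are illustrative):

```python
# Sketch: when saving a modified model, also save the processor so that
# preprocessor_config.json (and related files) end up in the output repo.
from transformers import AutoProcessor

base_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"      # assumed base repo
out_dir = "Llama-4-Scout-17B-16E-Instruct-abliterated-v2"  # illustrative path

processor = AutoProcessor.from_pretrained(base_id)
processor.save_pretrained(out_dir)  # writes preprocessor_config.json
```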
