Qwen3-32B-abliterated

#1021
by jacek2024 - opened

It's queued! :D

You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#Qwen3-32B-abliterated-GGUF for quants to appear.

looks like it died?

What do you mean by "died"? It is done and all quants are available under https://hf.tst.eu/model#Qwen3-32B-abliterated-GGUF

Yes I see them, but nothing on huggingface?

Ah, I see what happened. roslein/Qwen3-32B-abliterated was first, and huihui-ai/Qwen3-32B-abliterated uses the same name. Our system doesn't allow quantizing models with the same name, so it probably just skipped huihui-ai/Qwen3-32B-abliterated.

> Ah, I see what happened. roslein/Qwen3-32B-abliterated was first, and huihui-ai/Qwen3-32B-abliterated uses the same name. Our system doesn't allow quantizing models with the same name, so it probably just skipped huihui-ai/Qwen3-32B-abliterated.

@nicoboss Can you please create the huihui-ai version? Maybe just delete the non-functional roslein version, or create a new huihui one with a customized name. Thanks for all your work.

The huihui-ai version is also missing for Qwen3-30B-A3B-abliterated-GGUF (https://huggingface.co/huihui-ai/Qwen3-30B-A3B-abliterated).
Your current one is the mlabonne version. With multiple versions it gets relatively chaotic without the specific creator names; a prefix or suffix would be good so that everything is clear and unambiguous. Maybe always use the first 3 characters of the creator as a suffix, for example: abc(ros), abc(hui), abc(mla)
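The three-character suffix idea could be sketched as a small helper. This is purely illustrative: `disambiguate`, the `used` set, and the `name(cre)` format are made up for this example and are not part of mradermacher's actual pipeline.

```python
def disambiguate(repo_id: str, used: set[str]) -> str:
    """Hypothetical sketch of the suffix idea from this thread:
    if the bare model name is already taken, append the first
    three characters of the creator account in parentheses."""
    creator, name = repo_id.split("/", 1)
    if name not in used:
        return name
    return f"{name}({creator[:3]})"

# e.g. with roslein/Qwen3-32B-abliterated already queued:
used = {"Qwen3-32B-abliterated"}
print(disambiguate("huihui-ai/Qwen3-32B-abliterated", used))
# -> Qwen3-32B-abliterated(hui)
```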

@inputout Please just use https://huggingface.co/spaces/huggingface-projects/repo_duplicator to duplicate them to your account, name them in any unique way you wish, and we will quant them. Duplicate repo names are not at all a common issue, and having the same name with only a suffix added is too convenient to give up.

Looks like huihui started adding a prefix to his models.
@huihui-ai could you also rename the older ones?

> Looks like huihui started adding a prefix to his models.
> @huihui-ai could you also rename the older ones?

Yes, that would definitely be the best and clearest solution! repo_duplicator is a somewhat cumbersome, unsightly workaround.
Renaming just the old "important" models like the ones mentioned would be perfectly adequate. Would that be possible, @huihui-ai?

I have a suggestion. We can use "huihui-ai/Qwen3-32B-abliterated" as the keyword. In the future, we will add the "huihui" mark. I just hope you guys don't find the name too long.

> I have a suggestion. We can use "huihui-ai/Qwen3-32B-abliterated" as the keyword. In the future, we will add the "huihui" mark. I just hope you guys don't find the name too long.

I can only speak for myself: the length doesn't matter, and it's good if there is precise information about the creator in the file name. I even add this manually if it is missing (first 3 characters). If names are specific and unambiguously assignable, the automatic processing mechanisms of GGUF creators like mradermacher become more robust.

Length does matter - I've had more problems with too long model names than with collisions, so extending the model name even further is not something I plan to do.

I think creators should just get their act together and make better model names. There's no point in having ten Qwen3-32B-abliterated models - what are their differences, how do you even remember that? Out in the world, people refer to models by their name, too, so naming the models you want others to use the same way as existing models is a disservice to your users.

huihui-ai is in an especially bad position for another reason: they started to gate all their models, which means they automatically take lower priority when quantizing, because we don't typically quantize gated models unless specifically requested.

> what are their differences

Some work and some don't. As in this case, a non-functioning model can block a probably functioning one.
It's a waste if the potentially best model ( @huihui-ai ) can't be used by anyone. That's why I think different names are good.
Short names (like three characters) are sufficient for clear attribution and are of course better than very long names.

@inputout As I said, I have no issues deleting a nonfunctional model, but that requires evidence. Lots of people say a model is nonfunctional just because they didn't like its replies. We have no evidence that the mlabonne model is nonfunctional - do you have some?

And as I pointed out, I also agree that different names are good, that's why nico told you how to achieve that. And I disagree that three char names are sufficient for anything, even ignoring the problem that they don't exist.

We can see that all new models from @huihui-ai have a prefix, so let's hope he will also update the old models. It would be a good idea to discuss using a prefix with other creators too.

> @inputout As I said, I have no issues deleting a nonfunctional model, but that requires evidence. Lots of people say a model is nonfunctional just because they didn't like its replies. We have no evidence that the mlabonne model is nonfunctional - do you have some?

It was about the roslein model. Of course there will never be clear evidence; everything is too subjective, but there are e.g. other discussions (https://huggingface.co/roslein/Qwen3-32B-abliterated/discussions/1), and this thread was probably created to get another version of this model because of problems with the existing one. If all exist in parallel, that's best anyway (deleting was just an idea to solve the "the first one blocks the following ones" problem, because it seemed that only one version can exist). In the best case there is diversity and all are available.

> Our system doesn't allow quantizing models with the same name, so it probably just skipped huihui-ai/Qwen3-32B-abliterated.

Just brainstorming: allow the system to use the same model name with different creator names, and have it e.g. simply add a "+" or increment a number if the target name is occupied. No more dependency on the creators' naming. OK, I am silent now ;-) Thanks for all your work.
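The "increment a number if the target name is occupied" idea could look roughly like this. Again a hypothetical sketch: `resolve_collision` and the `-2`/`-3` suffix scheme are invented for illustration, not how the actual quant queue behaves.

```python
def resolve_collision(name: str, used: set[str]) -> str:
    """Hypothetical sketch of the counter idea from this thread:
    if the target name is occupied, append an incrementing
    number until a free name is found."""
    if name not in used:
        return name
    n = 2
    while f"{name}-{n}" in used:
        n += 1
    return f"{name}-{n}"

used = {"Qwen3-32B-abliterated", "Qwen3-32B-abliterated-2"}
print(resolve_collision("Qwen3-32B-abliterated", used))
# -> Qwen3-32B-abliterated-3
```

One downside, which may be why the thread's maintainers prefer explicit renames: a bare counter carries no information about which creator's model you actually downloaded.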

@inputout the roslein model is a great example where the model is clearly not "nonfunctional" - just some user randomly disliking the answers.

> Just brainstorming

I've already explained that making the model name longer is not an option. And just changing names also doesn't help anybody.

> In the best case there is diversity and all are available.

The models are available whether we quantize them or not. It was never our goal to quantize everything (well, not speaking for richard :)
