Model does *NOT* work.
#1 opened by ZeroWw
config.json: 100% 730/730 [00:00<00:00, 23.9kB/s]
merges.txt: 1.67M/? [00:00<00:00, 59.0kB/s]
added_tokens.json: 100% 707/707 [00:00<00:00, 20.7kB/s]
special_tokens_map.json: 100% 613/613 [00:00<00:00, 18.5kB/s]
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/tmp/ipython-input-2433247585.py in <cell line: 0>()
3 model_name = "qihoo360/Light-IF-32B"
4
----> 5 tokenizer = AutoTokenizer.from_pretrained(model_name)
6 model = AutoModelForCausalLM.from_pretrained(
7 model_name,
(4 frames hidden)
/usr/local/lib/python3.11/dist-packages/transformers/models/qwen2/tokenization_qwen2.py in __init__(self, vocab_file, merges_file, errors, unk_token, bos_token, eos_token, pad_token, clean_up_tokenization_spaces, split_special_tokens, **kwargs)
170 )
171
--> 172 with open(vocab_file, encoding="utf-8") as vocab_handle:
173 self.encoder = json.load(vocab_handle)
174 self.decoder = {v: k for k, v in self.encoder.items()}
TypeError: expected str, bytes or os.PathLike object, not NoneType
Also, the model weight files are missing from the repo.
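The repo contents can be double-checked with huggingface_hub; here is a minimal sketch (the expectation in the comment is only my inference from the download log above):

```python
# Quick check of which files are actually in the repo.
# huggingface_hub ships as a dependency of transformers, so it should already be installed.
from huggingface_hub import list_repo_files

files = list_repo_files("qihoo360/Light-IF-32B")
print(sorted(files))
# Based on the download log above I would expect no *.safetensors shards and no
# vocab.json / tokenizer.json, which is why the slow Qwen2 tokenizer is built
# with vocab_file=None and open(None, ...) raises the TypeError shown above.
```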
Thank you for your attention. Due to network issues, the weights were not uploaded successfully. We are re-uploading them and expect to complete the upload today.
I don't have a file to calibrate it with, so no imatrix, but now that the weights are uploaded, here is the first GGUF of this model: https://huggingface.co/DeProgrammer/Light-IF-32B-Q4_K_M-GGUF
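In case it helps, here is a minimal sketch for trying the GGUF locally with llama-cpp-python; the model path, context size, and prompt below are placeholders, not taken from the repo:

```python
# Run the quantized GGUF with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

# Placeholder path: point this at the .gguf file downloaded from the repo above.
llm = Llama(model_path="./Light-IF-32B-Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in three languages."}]
)
print(out["choices"][0]["message"]["content"])
```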