Triangle104 committed
Commit 2c6036f · verified · 1 Parent(s): 4803bce

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +3 -9
README.md CHANGED
@@ -1,20 +1,14 @@
  ---
- base_model: huihui-ai/Qwen3-1.7B-abliterated
+ base_model: mlabonne/Qwen3-1.7B-abliterated
  library_name: transformers
- license: apache-2.0
- license_link: https://huggingface.co/Qwen/Qwen3-1.7B/blob/main/LICENSE
- pipeline_tag: text-generation
  tags:
- - chat
- - abliterated
- - uncensored
  - llama-cpp
  - gguf-my-repo
  ---

  # Triangle104/Qwen3-1.7B-abliterated-Q5_K_S-GGUF
- This model was converted to GGUF format from [`huihui-ai/Qwen3-1.7B-abliterated`](https://huggingface.co/huihui-ai/Qwen3-1.7B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
- Refer to the [original model card](https://huggingface.co/huihui-ai/Qwen3-1.7B-abliterated) for more details on the model.
+ This model was converted to GGUF format from [`mlabonne/Qwen3-1.7B-abliterated`](https://huggingface.co/mlabonne/Qwen3-1.7B-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ Refer to the [original model card](https://huggingface.co/mlabonne/Qwen3-1.7B-abliterated) for more details on the model.

  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
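The diff truncates the "Use with llama.cpp" section of the README, so here is a minimal usage sketch for this GGUF repo. It assumes the Q5_K_S file follows GGUF-my-repo's usual naming (`qwen3-1.7b-abliterated-q5_k_s.gguf`); check the repository's Files tab for the exact filename before running.

```bash
# Install llama.cpp via Homebrew (macOS and Linux).
brew install llama.cpp

# Run the quantized model directly from the Hugging Face repo.
# NOTE: the --hf-file name is an assumption based on GGUF-my-repo's
# typical naming; verify it against the files in the repository.
llama-cli --hf-repo Triangle104/Qwen3-1.7B-abliterated-Q5_K_S-GGUF \
  --hf-file qwen3-1.7b-abliterated-q5_k_s.gguf \
  -p "The meaning to life and the universe is"

# Or serve the model over HTTP with a 2048-token context window.
llama-server --hf-repo Triangle104/Qwen3-1.7B-abliterated-Q5_K_S-GGUF \
  --hf-file qwen3-1.7b-abliterated-q5_k_s.gguf \
  -c 2048
```

By default, llama-server listens on http://localhost:8080 and exposes an OpenAI-compatible chat endpoint.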