huihui-ai lbourdois committed
Commit da2501f · verified · Parent: cce6d6e

Improve language tag (#2)

Co-authored-by: Loïck BOURDOIS <[email protected]>

Files changed (1): README.md (+117 −105)
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/huihui-ai/Qwen2.5-3B-Instruct-abliterated/blob/main/LICENSE
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- chat
- abliterated
- uncensored
---

# huihui-ai/Qwen2.5-3B-Instruct-abliterated

This is an uncensored version of [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) created with abliteration (see [this article](https://huggingface.co/blog/mlabonne/abliteration) to learn more about the technique).
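In short, abliteration estimates a "refusal direction" from the difference between the model's mean activations on harmful and harmless prompts, then removes that direction from the weights that write to the residual stream. The following is a minimal, illustrative sketch of that idea, not the exact script used to produce this model; the helper names and input tensors are hypothetical:

```python
import torch

def refusal_direction(harmful_acts: torch.Tensor, harmless_acts: torch.Tensor) -> torch.Tensor:
    # Difference of mean residual-stream activations between harmful and
    # harmless prompts, normalized to a unit vector (shapes are illustrative)
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    # Remove the component of a weight matrix that writes along the
    # refusal direction: W <- W - r (r^T W)
    return weight - torch.outer(direction, direction @ weight)
```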
Special thanks to [@FailSpy](https://huggingface.co/failspy) for the original code and technique. Please follow him if you're interested in abliterated models.

## Ollama

You can use [huihui_ai/qwen2.5-abliterate:3b](https://ollama.com/huihui_ai/qwen2.5-abliterate:3b) directly:
```
ollama run huihui_ai/qwen2.5-abliterate:3b
```
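
If you would rather call the model from code, the official `ollama` Python client (`pip install ollama`) works against a local Ollama server; a minimal sketch, assuming the server is running and the model has been pulled:

```python
import ollama

# Assumes a local Ollama server with the model already pulled
response = ollama.chat(
    model="huihui_ai/qwen2.5-abliterate:3b",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
)
print(response["message"]["content"])
```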

## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model_name = "huihui-ai/Qwen2.5-3B-Instruct-abliterated"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize conversation context
initial_messages = [
    {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."}
]
messages = initial_messages.copy()  # Copy the initial conversation context

# Enter conversation loop
while True:
    # Get user input
    user_input = input("User: ").strip()  # Strip leading and trailing spaces

    # If the user types '/exit', end the conversation
    if user_input.lower() == "/exit":
        print("Exiting chat.")
        break

    # If the user types '/clean', reset the conversation context
    if user_input.lower() == "/clean":
        messages = initial_messages.copy()  # Reset conversation context
        print("Chat history cleared. Starting a new conversation.")
        continue

    # If input is empty, prompt the user and continue
    if not user_input:
        print("Input cannot be empty. Please enter something.")
        continue

    # Add user input to the conversation
    messages.append({"role": "user", "content": user_input})

    # Build the chat template
    text = tokenizer.apply_chat_template(
        messages,
        tokenize=False,
        add_generation_prompt=True
    )

    # Tokenize input and prepare it for the model
    model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

    # Generate a response from the model
    generated_ids = model.generate(
        **model_inputs,
        max_new_tokens=8192
    )

    # Keep only the newly generated tokens, then decode without special tokens
    generated_ids = [
        output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
    ]
    response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

    # Add the model's response to the conversation
    messages.append({"role": "assistant", "content": response})

    # Print the model's response
    print(f"Qwen: {response}")
```
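
For interactive use, you may prefer to stream tokens as they are generated instead of waiting for the full response. A minimal sketch using `transformers`' built-in `TextStreamer`, replacing the `model.generate` call inside the loop above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip_prompt avoids
# echoing the chat template back to the user
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=8192,
    streamer=streamer
)
```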