Slipstream-Max committed on
Commit 6e08a0a · verified · 1 Parent(s): 727f992

Update README.md

Files changed (1)
  1. README.md +1 -12
README.md CHANGED
@@ -22,7 +22,7 @@ tags:
  ## Repository
 
  - **GGUF Converter:** [llama.cpp](https://github.com/ggerganov/llama.cpp)
- - **Model Hub:** https://huggingface.co/Slipstream-Max/CPsyCounX-InternLM2-Chat-7B-GGUF-fp16
+ - **Huggingface Hub:** https://huggingface.co/Slipstream-Max/CPsyCounX-InternLM2-Chat-7B-GGUF-fp16
 
 
  # Usage
@@ -105,14 +105,3 @@ tags:
  - Ideal for precision-sensitive applications
  - No quantization loss
  - Suitable for continued fine-tuning
-
-
- # Ethical Considerations
-
- All open-source code and models in this repository are licensed under the MIT License. As the currently open-sourced EmoLLM model may have certain limitations, we hereby state the following:
-
- EmoLLM is currently only capable of providing emotional support and related advisory services, and cannot yet offer professional psychological counseling or psychotherapy services. EmoLLM is not a substitute for qualified mental health professionals or psychotherapists, and may exhibit inherent limitations while potentially generating erroneous, harmful, offensive, or otherwise undesirable outputs. In critical or high-risk scenarios, users must exercise prudence and refrain from treating EmoLLM's outputs as definitive decision-making references, to avoid personal harm, property loss, or other significant damages.
-
- Under no circumstances shall the authors, contributors, or copyright holders be liable for any claims, damages, or other liabilities (whether in contract, tort, or otherwise) arising from the use of or transactions related to the EmoLLM software.
-
- By using EmoLLM, you agree to the above terms and conditions, acknowledge awareness of its potential risks, and further agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities resulting from your use of EmoLLM.
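The context lines above describe the fp16 GGUF build (no quantization loss, suitable for continued fine-tuning). As a minimal sketch, not part of the commit, the model from the Hub repository linked above could be downloaded and run with llama-cpp-python; the GGUF filename used here is an assumption and should be checked against the repository's file listing.

```python
# Hedged sketch: fetch the fp16 GGUF from the Hub and run a short chat turn.
# The filename "cpsycounx-internlm2-chat-7b-fp16.gguf" is assumed, not confirmed
# by the commit -- verify it in the repository's file listing.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Slipstream-Max/CPsyCounX-InternLM2-Chat-7B-GGUF-fp16",
    filename="cpsycounx-internlm2-chat-7b-fp16.gguf",  # assumed filename
)

# Load the full-precision GGUF; fp16 needs roughly 14 GB of memory for a 7B model.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```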