---
license: mit
datasets:
- CAS-SIAT-XinHai/CPsyCoun
language:
- zh
base_model:
- internlm/internlm2_5-7b-chat
tags:
- psychology
---

# Model Details

## Model Description

- **Developed by:** AITA
- **Model type:** Full-Precision Text Generation LLM (FP16 GGUF format)
- **Original Model:** https://huggingface.co/CAS-SIAT-XinHai/CPsyCounX
- **Precision:** FP16 (non-quantized, full-precision version)

## Repository

- **GGUF Converter:** [llama.cpp](https://github.com/ggerganov/llama.cpp)
- **Model Hub:** https://huggingface.co/Slipstream-Max/CPsyCounX-InternLM2-Chat-7B-GGUF-fp16

# Usage

## Method 1: llama.cpp Backend Server + Chatbox

**Step 1: Start the [llama.cpp](https://github.com/ggml-org/llama.cpp) server**
```bash
# Flags: -c sets the context length, --host 0.0.0.0 allows remote connections,
# --port sets the server port, --n-gpu-layers offloads layers to the GPU (if available).
# (Inline comments cannot follow a trailing backslash, so they are collected here.)
./llama-server \
  -m /path/to/model.gguf \
  -c 2048 \
  --host 0.0.0.0 \
  --port 8080 \
  --n-gpu-layers 35
```
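
Once the server is running, you can sanity-check it before wiring up a client. A minimal sketch, assuming a recent llama-server build, which exposes a `/health` endpoint and an OpenAI-compatible `/v1/chat/completions` endpoint:
```bash
# Confirm the server is up
curl http://localhost:8080/health

# Send a one-off chat request (prompt in Chinese, matching the model's language)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "你好"}]}'
```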

**Step 2: Connect via Chatbox**
1. Download [Chatbox](https://github.com/Bin-Huang/chatbox)
2. Configure the API endpoint:
```
API URL: http://localhost:8080
Model: (leave empty)
API Type: llama.cpp
```
3. Set generation parameters:
```json
{
  "temperature": 0.7,
  "max_tokens": 512,
  "top_p": 0.9
}
```
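
If you prefer scripting over a GUI, the same generation parameters can be sent straight to the server's OpenAI-compatible endpoint. A minimal sketch, assuming the server started in Step 1:
```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "你好"}],
    "temperature": 0.7,
    "max_tokens": 512,
    "top_p": 0.9
  }'
```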

## Method 2: LM Studio

1. Download [LM Studio](https://lmstudio.ai/)
2. Load the GGUF file:
   - Launch LM Studio
   - Search for Slipstream-Max/CPsyCounX-InternLM2-Chat-7B-GGUF-fp16
3. Configure settings:
```yaml
Context Length: 2048
GPU Offload: Recommended (enable if available)
Batch Size: 512
```
4. Start chatting through the built-in UI
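
LM Studio can also expose the loaded model through a local OpenAI-compatible server. A minimal sketch, assuming the local server is enabled on LM Studio's default port (1234); the model identifier below is hypothetical, so use whatever name LM Studio displays for the loaded model:
```bash
# Port 1234 is LM Studio's default; the model name here is a placeholder
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "cpsycounx-internlm2-chat-7b",
    "messages": [{"role": "user", "content": "你好"}],
    "temperature": 0.7
  }'
```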

# Precision Details

| Filename       | Precision | Size    | Characteristics                |
|----------------|-----------|---------|--------------------------------|
| CPsyCounX.gguf | FP16      | 15.5 GB | Full original model precision  |

# Hardware Requirements

**Minimum:**
- 24GB RAM for the 7B model (7B parameters × 2 bytes in FP16 ≈ 14GB of weights, plus KV cache and runtime overhead)
- CPU with AVX/AVX2 instruction set support

**Recommended:**
- 32GB RAM
- CUDA-capable GPU (for acceleration)
- Fast SSD storage (due to the large model size)

# Key Notes

1. Requires a recent llama.cpp build
2. Use `--n-gpu-layers 35` for GPU acceleration (requires a CUDA-enabled build)
3. Initial loading takes longer than for quantized versions (2-5 minutes)
4. Requires more memory and storage than quantized versions
5. Use `--mlock` to lock the model in RAM and prevent swapping (see the combined launch example below)
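
Putting notes 2 and 5 together, a full launch command might look like the following sketch; adjust the model path and layer count to your hardware:
```bash
./llama-server \
  -m /path/to/CPsyCounX.gguf \
  -c 2048 \
  --port 8080 \
  --n-gpu-layers 35 \
  --mlock
```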

# Advantages

- Preserves the original model precision
- Ideal for precision-sensitive applications
- No quantization loss
- Suitable for continued fine-tuning

# Ethical Considerations

All open-source code and models in this repository are licensed under the MIT License. As the currently open-sourced EmoLLM model may have certain limitations, we hereby state the following:

EmoLLM is currently only capable of providing emotional support and related advisory services, and cannot yet offer professional psychological counseling or psychotherapy services. EmoLLM is not a substitute for qualified mental health professionals or psychotherapists, and may exhibit inherent limitations while potentially generating erroneous, harmful, offensive, or otherwise undesirable outputs. In critical or high-risk scenarios, users must exercise prudence and refrain from treating EmoLLM's outputs as definitive decision-making references, to avoid personal harm, property loss, or other significant damages.

Under no circumstances shall the authors, contributors, or copyright holders be liable for any claims, damages, or other liabilities (whether in contract, tort, or otherwise) arising from the use of or transactions related to the EmoLLM software.

By using EmoLLM, you agree to the above terms and conditions, acknowledge awareness of its potential risks, and further agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities resulting from your use of EmoLLM.