---
license: mit
datasets:
- CAS-SIAT-XinHai/CPsyCoun
- scutcyr/SoulChatCorpus
language:
- zh
base_model:
- internlm/internlm2_5-7b-chat
tags:
- psychology
---

# Model Details

## Model Description

- **Developed by:** AITA
- **Model type:** Full-Precision Text Generation LLM (FP16 GGUF format)  
- **Original Model:** https://modelscope.cn/models/chg0901/EmoLLMV3.0/summary
- **Precision:** FP16 (non-quantized full-precision version)  

## Repository

- **GGUF Converter:** [llama.cpp](https://github.com/ggerganov/llama.cpp)  
- **Huggingface Hub:** https://huggingface.co/Slipstream-Max/Emollm-InternLM2.5-7B-chat-GGUF-fp16/
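
To fetch the GGUF file locally, you can use the Hugging Face CLI. A minimal sketch (the `--local-dir` target is an arbitrary choice; the exact `.gguf` filename is listed on the Hub page):

```bash
# Install the Hugging Face Hub client, then download the repo contents
pip install -U huggingface_hub
huggingface-cli download Slipstream-Max/Emollm-InternLM2.5-7B-chat-GGUF-fp16 \
  --local-dir ./emollm-gguf
```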


# Usage

## Method 1: llama.cpp Backend Server + Chatbox

**Step 1: Start a [llama.cpp](https://github.com/ggml-org/llama.cpp) server**
```bash
# -c sets the context length; --host 0.0.0.0 allows remote connections;
# --port sets the server port; --n-gpu-layers offloads layers to the GPU (if available)
./llama-server \
  -m /path/to/model.gguf \
  -c 2048 \
  --host 0.0.0.0 \
  --port 8080 \
  --n-gpu-layers 35
```
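
Once the server is running, you can sanity-check it from the command line before configuring a client. A minimal sketch using the server's `/health` endpoint and its OpenAI-compatible chat API (the prompt is an arbitrary example; recent llama.cpp builds ignore the `model` field, so it can be omitted):

```bash
# Confirm the server is up
curl http://localhost:8080/health

# Send a test chat request through the OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "你好，我最近压力很大。"}],
        "temperature": 0.7,
        "max_tokens": 512
      }'
```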

**Step 2: Connect via Chatbox**  
1. Download [Chatbox](https://github.com/Bin-Huang/chatbox)
2. Configure API endpoint:
   ```
   API URL: http://localhost:8080
   Model: (leave empty)
   API Type: llama.cpp
   ```
3. Set generation parameters:
   ```json
   {
     "temperature": 0.7,
     "max_tokens": 512,
     "top_p": 0.9
   }
   ```

## Method 2: LM Studio

1. Download [LM Studio](https://lmstudio.ai/)
2. Load the GGUF file:
   - Launch LM Studio
   - Search for `Slipstream-Max/Emollm-InternLM2.5-7B-chat-GGUF-fp16` in the model browser and download it
3. Configure settings:
   ```yaml
   Context Length: 2048
   GPU Offload: Recommended (enable if available)
   Batch Size: 512
   ```
4. Start chatting through the built-in UI
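
LM Studio can also expose the loaded model over a local OpenAI-compatible API (in recent versions via the Developer/Server tab, defaulting to port 1234). A hedged sketch, assuming the default port; the `model` identifier below is a placeholder and should match whatever LM Studio reports for the loaded model:

```bash
# Query LM Studio's local server with the OpenAI-compatible chat endpoint
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "emollm-internlm2.5-7b-chat",
        "messages": [{"role": "user", "content": "你好"}],
        "temperature": 0.7
      }'
```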


# Precision Details

| Filename      | Precision | Size    | Characteristics               |
|---------------|-----------|---------|-------------------------------|
| emollmv3.gguf | FP16      | 15.5 GB | Full original model precision |
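
If the 15.5 GB FP16 file exceeds your hardware, llama.cpp ships a quantization tool that produces smaller variants from it. A hedged sketch (the binary is named `llama-quantize` in current builds, `quantize` in older ones; the output size is an approximation for a 7B model):

```bash
# Quantize the FP16 GGUF to Q4_K_M (~4-5 GB), trading some precision for size
./llama-quantize emollmv3.gguf emollmv3-Q4_K_M.gguf Q4_K_M
```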


# Hardware Requirements

**Minimum:**  
- 24GB RAM (for 7B model)  
- CPU with AVX/AVX2 instruction set support  

**Recommended:**  
- 32GB RAM  
- CUDA-capable GPU (for acceleration)  
- Fast SSD storage (due to large model size)  


# Key Notes

1. Requires a recent llama.cpp build with GGUF v3 support
2. Use `--n-gpu-layers 35` for GPU acceleration (requires a CUDA-enabled build)
3. Initial loading takes longer than for quantized models (typically 2-5 minutes)
4. Requires more memory/storage than quantized versions
5. Use `--mlock` to keep the model resident in RAM and prevent swapping (see the combined example below)
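
Putting these notes together, a hedged example invocation for a CUDA-enabled build (the layer count and context length are starting points to tune for your GPU and workload):

```bash
./llama-server \
  -m /path/to/emollmv3.gguf \
  -c 2048 \
  --host 0.0.0.0 \
  --port 8080 \
  --n-gpu-layers 35 \
  --mlock   # pin model weights in RAM to avoid swapping
```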


# Advantages

- Preserves original model precision
- Ideal for precision-sensitive applications
- No quantization loss
- Suitable for continued fine-tuning


# Ethical Considerations

All open-source code and models in this repository are licensed under the MIT License. As the currently open-sourced EmoLLM model may have certain limitations, we hereby state the following:

EmoLLM is currently only capable of providing emotional support and related advisory services, and cannot yet offer professional psychological counseling or psychotherapy services. EmoLLM is not a substitute for qualified mental health professionals or psychotherapists, and may exhibit inherent limitations while potentially generating erroneous, harmful, offensive, or otherwise undesirable outputs. In critical or high-risk scenarios, users must exercise prudence and refrain from treating EmoLLM's outputs as definitive decision-making references, to avoid personal harm, property loss, or other significant damages.

Under no circumstances shall the authors, contributors, or copyright holders be liable for any claims, damages, or other liabilities (whether in contract, tort, or otherwise) arising from the use of or transactions related to the EmoLLM software.

By using EmoLLM, you agree to the above terms and conditions, acknowledge awareness of its potential risks, and further agree to indemnify and hold harmless the authors, contributors, and copyright holders from any claims, damages, or liabilities resulting from your use of EmoLLM.


# Citation

```bibtex
@misc{2024EmoLLM,
    title={EmoLLM: Reinventing Mental Health Support with Large Language Models},
    author={EmoLLM Team},
    howpublished={\url{https://github.com/SmartFlowAI/EmoLLM}},
    year={2024}
}
```