malt666 committed on
Commit 2b28cab · verified · 1 Parent(s): 513a5d0

Upload 4 files

Files changed (4)
  1. Dockerfile +21 -0
  2. README.md +80 -10
  3. app.py +879 -0
  4. requirements.txt +0 -0
Dockerfile ADDED
@@ -0,0 +1,21 @@
+ FROM python:3.11-slim
+
+ WORKDIR /app
+
+ COPY requirements.txt .
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ COPY . .
+
+ # Set environment variables
+ ENV HOST=0.0.0.0
+ ENV PORT=7860
+
+ # Remove sensitive files
+ RUN rm -f config.json password.txt
+
+ # Expose the port (Hugging Face uses port 7860 by default)
+ EXPOSE 7860
+
+ # Startup command
+ CMD ["python", "app.py"]
README.md CHANGED
@@ -1,10 +1,80 @@
- ---
- title: Abacus Chat Proxy
- emoji: 📈
- colorFrom: gray
- colorTo: blue
- sdk: docker
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Abacus Chat Proxy
+
+ A proxy server for relaying API requests.
+
+ ## 🚀 Quick Start
+
+ ### Running Locally
+
+ #### Windows users
+
+ 1. Double-click `start.bat`
+ 2. On first run, choose `0` to configure
+ 3. After configuring, choose `Y` to start immediately, or `N` to return to the menu
+ 4. On later runs, choose `1` to start the proxy directly
+ 5. The proxy server runs at `http://127.0.0.1:9876/` by default
+
+ #### Linux/macOS users
+
+ ```bash
+ # Make the script executable
+ chmod +x start.sh
+
+ # Run the script
+ ./start.sh
+ ```
+
+ The options are the same as on Windows.
+
+ ### 🌐 Hugging Face Deployment
+
+ 1. Fork this repository to your GitHub account
+ 2. Create a new Space on Hugging Face (choose the Docker type)
+ 3. Connect your GitHub repository in the Space settings
+ 4. Add the following Secrets in the Space settings:
+    - Group 1:
+      - `covid_1`: the 1st conversation ID
+      - `cookie_1`: the 1st cookies string
+    - Group 2 (if needed):
+      - `covid_2`: the 2nd conversation ID
+      - `cookie_2`: the 2nd cookies string
+    - Additional groups follow the same pattern (`covid_3`/`cookie_3`...)
+    - `password`: (optional) access password
+ 5. The Space deploys automatically and the service runs at `https://your-username-your-space-name.hf.space` (see the example request below)
+
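+ Once deployed, the Space exposes an OpenAI-style API. Below is a minimal sketch of a client call; the Space URL and password are placeholders you substitute with your own values, and the model name must be one returned by `GET /v1/models`:
+
+ ```python
+ import requests
+
+ BASE_URL = "https://your-username-your-space-name.hf.space"  # placeholder: your Space URL
+ PASSWORD = "your-password"  # placeholder: only needed if the `password` secret is set
+
+ resp = requests.post(
+     f"{BASE_URL}/v1/chat/completions",
+     headers={"Authorization": f"Bearer {PASSWORD}"},
+     json={
+         "model": "Claude Sonnet 3.7",  # any name listed by GET /v1/models
+         "messages": [{"role": "user", "content": "Hello!"}],
+         "stream": False,
+     },
+ )
+ print(resp.json()["choices"][0]["message"]["content"])
+ ```
+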
+ ## ⚙️ Requirements
+
+ - Python 3.8+
+ - pip
+
+ ## 📦 Dependencies
+
+ ```text
+ Flask==3.1.0
+ requests==2.32.3
+ PyJWT==2.8.0
+ ```
+
+ ## 📝 Configuration
+
+ ### Local Configuration
+
+ On first run, choose `0` to configure and fill in the information as prompted. The configuration is saved to `config.json` (see the sketch of its structure below).
+
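+ A minimal sketch of the structure that `resolve_config()` in `app.py` expects in `config.json`; the IDs and cookie strings are placeholders:
+
+ ```python
+ import json
+
+ config = {
+     "config": [
+         # one group per account; add more entries as needed
+         {"conversation_id": "your-conversation-id", "cookies": "your-abacus-cookie-string"},
+     ]
+ }
+ with open("config.json", "w") as f:
+     json.dump(config, f, indent=2)
+ ```
+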
+ ### Environment Variable Configuration
+
+ When deploying with Docker or on a cloud platform, configure the following environment variables (resolved as sketched after this list):
+
+ - Required (at least one group):
+   - `covid_1` + `cookie_1`: group 1
+   - `covid_2` + `cookie_2`: group 2 (optional)
+   - and so on...
+ - Optional:
+   - `password`: access password
+
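+ A minimal sketch of how these variables are read; it mirrors `resolve_config()` in `app.py`, which falls back to `config.json` when no `covid_N`/`cookie_N` pair is found:
+
+ ```python
+ import os
+
+ def read_env_groups():
+     groups = []
+     i = 1
+     while True:
+         covid = os.environ.get(f"covid_{i}")
+         cookie = os.environ.get(f"cookie_{i}")
+         if not (covid and cookie):
+             break  # numbering must be contiguous: covid_1, covid_2, ...
+         groups.append({"conversation_id": covid, "cookies": cookie})
+         i += 1
+     return groups
+ ```
+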
+ ## 🔒 Security Notes
+
+ - Keep the sensitive information in your configuration files safe
+ - Set an access password when deploying to Hugging Face
+ - Do not commit configuration files containing sensitive information to public repositories
+ - When configuring the Space on Hugging Face, store sensitive values in Secrets
app.py ADDED
@@ -0,0 +1,879 @@
+ from flask import Flask, request, jsonify, Response, render_template_string
+ import requests
+ import time
+ import json
+ import uuid
+ import random
+ import io
+ import re
+ from functools import wraps
+ import hashlib
+ import jwt
+ import os
+ import threading
+ from datetime import datetime
+
+ app = Flask(__name__)
+
+
+ API_ENDPOINT_URL = "https://abacus.ai/api/v0/describeDeployment"
+ MODEL_LIST_URL = "https://abacus.ai/api/v0/listExternalApplications"
+ CHAT_URL = "https://apps.abacus.ai/api/_chatLLMSendMessageSSE"
+ USER_INFO_URL = "https://abacus.ai/api/v0/_getUserInfo"
+
+
+ USER_AGENTS = [
+     "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36"
+ ]
+
+
+ PASSWORD = None
+ USER_NUM = 0
+ USER_DATA = []
+ CURRENT_USER = -1
+ MODELS = set()
+
+
+ TRACE_ID = "3042e28b3abf475d8d973c7e904935af"
+ SENTRY_TRACE = f"{TRACE_ID}-80d9d2538b2682d0"
+
+
+ # Counter that records the number of health checks
+ health_check_counter = 0
+
+
+ # HTML template
+ INDEX_HTML = """
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+     <meta charset="UTF-8">
+     <meta name="viewport" content="width=device-width, initial-scale=1.0">
+     <title>Abacus Chat Proxy</title>
+     <style>
+         * {
+             margin: 0;
+             padding: 0;
+             box-sizing: border-box;
+         }
+         body {
+             font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
+             line-height: 1.6;
+             color: #333;
+             background: #f5f5f5;
+             min-height: 100vh;
+             display: flex;
+             flex-direction: column;
+             align-items: center;
+             padding: 2rem;
+         }
+         .container {
+             max-width: 800px;
+             width: 100%;
+             background: white;
+             padding: 2rem;
+             border-radius: 12px;
+             box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
+         }
+         h1 {
+             color: #2c3e50;
+             margin-bottom: 1rem;
+             text-align: center;
+             font-size: 2.5rem;
+         }
+         .status-card {
+             background: #f8f9fa;
+             border-radius: 8px;
+             padding: 1.5rem;
+             margin: 1.5rem 0;
+         }
+         .status-item {
+             display: flex;
+             justify-content: space-between;
+             align-items: center;
+             padding: 0.5rem 0;
+             border-bottom: 1px solid #dee2e6;
+         }
+         .status-item:last-child {
+             border-bottom: none;
+         }
+         .status-label {
+             color: #6c757d;
+             font-weight: 500;
+         }
+         .status-value {
+             color: #28a745;
+             font-weight: 600;
+         }
+         .status-value.warning {
+             color: #ffc107;
+         }
+         .footer {
+             margin-top: 2rem;
+             text-align: center;
+             color: #6c757d;
+         }
+         .models-list {
+             list-style: none;
+             display: flex;
+             flex-wrap: wrap;
+             gap: 0.5rem;
+             margin-top: 0.5rem;
+         }
+         .model-tag {
+             background: #e9ecef;
+             padding: 0.25rem 0.75rem;
+             border-radius: 16px;
+             font-size: 0.875rem;
+             color: #495057;
+         }
+         .endpoints {
+             margin-top: 2rem;
+         }
+         .endpoint-item {
+             background: #f8f9fa;
+             padding: 1rem;
+             border-radius: 8px;
+             margin-bottom: 1rem;
+         }
+         .endpoint-url {
+             font-family: monospace;
+             background: #e9ecef;
+             padding: 0.25rem 0.5rem;
+             border-radius: 4px;
+         }
+         @media (max-width: 768px) {
+             .container {
+                 padding: 1rem;
+             }
+             h1 {
+                 font-size: 2rem;
+             }
+         }
+     </style>
+ </head>
+ <body>
+     <div class="container">
+         <h1>🤖 Abacus Chat Proxy</h1>
+
+         <div class="status-card">
+             <div class="status-item">
+                 <span class="status-label">Service status</span>
+                 <span class="status-value">Running</span>
+             </div>
+             <div class="status-item">
+                 <span class="status-label">Uptime</span>
+                 <span class="status-value">{{ uptime }}</span>
+             </div>
+             <div class="status-item">
+                 <span class="status-label">Health checks</span>
+                 <span class="status-value">{{ health_checks }}</span>
+             </div>
+             <div class="status-item">
+                 <span class="status-label">Configured users</span>
+                 <span class="status-value">{{ user_count }}</span>
+             </div>
+             <div class="status-item">
+                 <span class="status-label">Available models</span>
+                 <div class="models-list">
+                     {% for model in models %}
+                     <span class="model-tag">{{ model }}</span>
+                     {% endfor %}
+                 </div>
+             </div>
+         </div>
+
+         <div class="endpoints">
+             <h2>API Endpoints</h2>
+             <div class="endpoint-item">
+                 <p>List models:</p>
+                 <code class="endpoint-url">GET /v1/models</code>
+             </div>
+             <div class="endpoint-item">
+                 <p>Chat completions:</p>
+                 <code class="endpoint-url">POST /v1/chat/completions</code>
+             </div>
+             <div class="endpoint-item">
+                 <p>Health check:</p>
+                 <code class="endpoint-url">GET /health</code>
+             </div>
+         </div>
+
+         <div class="footer">
+             <p>© {{ year }} Abacus Chat Proxy. Keep it simple, keep it reliable.</p>
+         </div>
+     </div>
+ </body>
+ </html>
+ """
+
+ # Record the start time
+ START_TIME = datetime.now()
+
+
+ def resolve_config():
+     # Read multiple configuration groups from environment variables
+     config_list = []
+     i = 1
+     while True:
+         covid = os.environ.get(f"covid_{i}")
+         cookie = os.environ.get(f"cookie_{i}")
+         if not (covid and cookie):
+             break
+         config_list.append({
+             "conversation_id": covid,
+             "cookies": cookie
+         })
+         i += 1
+
+     # If any configuration was found in the environment, use it
+     if config_list:
+         return config_list
+
+     # Otherwise, read from config.json
+     try:
+         with open("config.json", "r") as f:
+             config = json.load(f)
+             config_list = config.get("config")
+             return config_list
+     except FileNotFoundError:
+         print("config.json not found")
+         return []
+     except json.JSONDecodeError:
+         print("config.json is malformed")
+         return []
+
+
+ def get_password():
+     global PASSWORD
+     # Read the password from the environment
+     env_password = os.environ.get("password")
+     if env_password:
+         PASSWORD = hashlib.sha256(env_password.encode()).hexdigest()
+         return
+
+     # Otherwise, read it from password.txt
+     try:
+         with open("password.txt", "r") as f:
+             PASSWORD = f.read().strip()
+     except FileNotFoundError:
+         with open("password.txt", "w") as f:
+             PASSWORD = None
+
+
+ def require_auth(f):
+     @wraps(f)
+     def decorated(*args, **kwargs):
+         if not PASSWORD:
+             return f(*args, **kwargs)
+         auth = request.authorization
+         if not auth or not check_auth(auth.token):
+             return jsonify({"error": "Unauthorized access"}), 401
+         return f(*args, **kwargs)
+
+     return decorated
+
+
+ def check_auth(token):
+     return hashlib.sha256(token.encode()).hexdigest() == PASSWORD
+
+
+ def is_token_expired(token):
+     if not token:
+         return True
+
+     try:
+         # Decode the token without verifying the signature
+         payload = jwt.decode(token, options={"verify_signature": False})
+         # Get the expiry time; treat the token as expired 5 minutes early
+         return payload.get('exp', 0) - time.time() < 300
+     except:
+         return True
+
+
+ def refresh_token(session, cookies):
+     """Use the cookies to refresh the session token and return only the new token"""
+     headers = {
+         "accept": "application/json, text/plain, */*",
+         "accept-language": "zh-CN,zh;q=0.9",
+         "content-type": "application/json",
+         "reai-ui": "1",
+         "sec-ch-ua": "\"Chromium\";v=\"116\", \"Not)A;Brand\";v=\"24\", \"Google Chrome\";v=\"116\"",
+         "sec-ch-ua-mobile": "?0",
+         "sec-ch-ua-platform": "\"Windows\"",
+         "sec-fetch-dest": "empty",
+         "sec-fetch-mode": "cors",
+         "sec-fetch-site": "same-site",
+         "x-abacus-org-host": "apps",
+         "user-agent": random.choice(USER_AGENTS),
+         "origin": "https://apps.abacus.ai",
+         "referer": "https://apps.abacus.ai/",
+         "cookie": cookies
+     }
+
+     try:
+         response = session.post(
+             USER_INFO_URL,
+             headers=headers,
+             json={},
+             cookies=None
+         )
+
+         if response.status_code == 200:
+             response_data = response.json()
+             if response_data.get('success') and 'sessionToken' in response_data.get('result', {}):
+                 return response_data['result']['sessionToken']
+             else:
+                 print(f"Failed to refresh token: {response_data.get('error', 'unknown error')}")
+                 return None
+         else:
+             print(f"Failed to refresh token, status code: {response.status_code}")
+             return None
+     except Exception as e:
+         print(f"Exception while refreshing token: {e}")
+         return None
+
+
+ def get_model_map(session, cookies, session_token):
+     """Get the list of available models and their mappings"""
+     headers = {
+         "accept": "application/json, text/plain, */*",
+         "accept-language": "zh-CN,zh;q=0.9",
+         "content-type": "application/json",
+         "reai-ui": "1",
+         "sec-ch-ua": "\"Chromium\";v=\"116\", \"Not)A;Brand\";v=\"24\", \"Google Chrome\";v=\"116\"",
+         "sec-ch-ua-mobile": "?0",
+         "sec-ch-ua-platform": "\"Windows\"",
+         "sec-fetch-dest": "empty",
+         "sec-fetch-mode": "cors",
+         "sec-fetch-site": "same-site",
+         "x-abacus-org-host": "apps",
+         "user-agent": random.choice(USER_AGENTS),
+         "origin": "https://apps.abacus.ai",
+         "referer": "https://apps.abacus.ai/",
+         "cookie": cookies
+     }
+
+     if session_token:
+         headers["session-token"] = session_token
+
+     model_map = {}
+     models_set = set()
+
+     try:
+         response = session.post(
+             MODEL_LIST_URL,
+             headers=headers,
+             json={},
+             cookies=None
+         )
+
+         if response.status_code != 200:
+             print(f"Failed to fetch the model list, status code: {response.status_code}")
+             raise Exception("API request failed")
+
+         data = response.json()
+         if not data.get('success'):
+             print(f"Failed to fetch the model list: {data.get('error', 'unknown error')}")
+             raise Exception("API returned an error")
+
+         applications = []
+         if isinstance(data.get('result'), dict):
+             applications = data.get('result', {}).get('externalApplications', [])
+         elif isinstance(data.get('result'), list):
+             applications = data.get('result', [])
+
+         for app in applications:
+             app_name = app.get('name', '')
+             app_id = app.get('externalApplicationId', '')
+             prediction_overrides = app.get('predictionOverrides', {})
+             llm_name = prediction_overrides.get('llmName', '') if prediction_overrides else ''
+
+             if not (app_name and app_id and llm_name):
+                 continue
+
+             model_name = app_name
+             model_map[model_name] = (app_id, llm_name)
+             models_set.add(model_name)
+
+         if not model_map:
+             raise Exception("No available models were found")
+
+         return model_map, models_set
+
+     except Exception as e:
+         print(f"Exception while fetching the model list: {e}")
+         raise
+
+
+ def init_session():
+     get_password()
+     global USER_NUM, MODELS, USER_DATA
+     config_list = resolve_config()
+     user_num = len(config_list)
+     all_models = set()
+
+     for i in range(user_num):
+         user = config_list[i]
+         cookies = user.get("cookies")
+         conversation_id = user.get("conversation_id")
+         session = requests.Session()
+
+         session_token = refresh_token(session, cookies)
+         if not session_token:
+             print(f"Could not obtain a token for cookie {i+1}")
+             continue
+
+         try:
+             model_map, models_set = get_model_map(session, cookies, session_token)
+             all_models.update(models_set)
+             USER_DATA.append((session, cookies, session_token, conversation_id, model_map))
+         except Exception as e:
+             print(f"Failed to configure user {i+1}: {e}")
+             continue
+
+     USER_NUM = len(USER_DATA)
+     if USER_NUM == 0:
+         print("No user available, exiting...")
+         exit(1)
+
+     MODELS = all_models
+     print(f"Startup complete, {USER_NUM} user(s) configured")
+
+
+ def update_cookie(session, cookies):
+     cookie_jar = {}
+     for key, value in session.cookies.items():
+         cookie_jar[key] = value
+     cookie_dict = {}
+     for item in cookies.split(";"):
+         key, value = item.strip().split("=", 1)
+         cookie_dict[key] = value
+     cookie_dict.update(cookie_jar)
+     cookies = "; ".join([f"{key}={value}" for key, value in cookie_dict.items()])
+     return cookies
+
+
+ user_data = init_session()
+
+
+ @app.route("/v1/models", methods=["GET"])
+ @require_auth
+ def get_models():
+     if len(MODELS) == 0:
+         return jsonify({"error": "No models available"}), 500
+     model_list = []
+     for model in MODELS:
+         model_list.append(
+             {
+                 "id": model,
+                 "object": "model",
+                 "created": int(time.time()),
+                 "owned_by": "Elbert",
+                 "name": model,
+             }
+         )
+     return jsonify({"object": "list", "data": model_list})
+
+
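+ # Example request body accepted by the endpoint below (illustrative values):
+ #   {
+ #       "model": "Claude Sonnet 3.7",                        # must be one of the names in MODELS
+ #       "messages": [{"role": "user", "content": "Hello"}],  # required
+ #       "stream": true,                                      # optional, defaults to false
+ #       "think": true                                        # optional, only honored for "Claude Sonnet 3.7"
+ #   }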
+ @app.route("/v1/chat/completions", methods=["POST"])
+ @require_auth
+ def chat_completions():
+     openai_request = request.get_json()
+     stream = openai_request.get("stream", False)
+     messages = openai_request.get("messages")
+     if messages is None:
+         return jsonify({"error": "Messages is required", "status": 400}), 400
+     model = openai_request.get("model")
+     if model not in MODELS:
+         return (
+             jsonify(
+                 {
+                     "error": "Model not available, check if it is configured properly",
+                     "status": 404,
+                 }
+             ),
+             404,
+         )
+     message = format_message(messages)
+     think = (
+         openai_request.get("think", False) if model == "Claude Sonnet 3.7" else False
+     )
+     return (
+         send_message(message, model, think)
+         if stream
+         else send_message_non_stream(message, model, think)
+     )
+
+
+ def get_user_data():
+     global CURRENT_USER
+     CURRENT_USER = (CURRENT_USER + 1) % USER_NUM
+     print(f"Using configuration {CURRENT_USER+1}")
+
+     # Get the user's data
+     session, cookies, session_token, conversation_id, model_map = USER_DATA[CURRENT_USER]
+
+     # Check whether the token has expired; if so, refresh it
+     if is_token_expired(session_token):
+         print(f"Token for cookie {CURRENT_USER+1} has expired or is about to expire, refreshing...")
+         new_token = refresh_token(session, cookies)
+         if new_token:
+             # Update the globally stored token
+             USER_DATA[CURRENT_USER] = (session, cookies, new_token, conversation_id, model_map)
+             session_token = new_token
+             print(f"Token refreshed: {session_token[:15]}...{session_token[-15:]}")
+         else:
+             print(f"Warning: could not refresh the token for cookie {CURRENT_USER+1}, continuing with the current token")
+
+     return (session, cookies, session_token, conversation_id, model_map)
+
+
+ def generate_trace_id():
+     """Generate a new trace_id and sentry_trace"""
+     trace_id = str(uuid.uuid4()).replace('-', '')
+     sentry_trace = f"{trace_id}-{str(uuid.uuid4())[:16]}"
+     return trace_id, sentry_trace
+
+
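+ # The streaming path below re-emits Abacus SSE segments as OpenAI-style chunks, e.g.:
+ #   data: {"object": "chat.completion.chunk", "choices": [{"delta": {"content": "..."}}]}
+ #   ...
+ #   data: {"object": "chat.completion.chunk", "choices": [{"delta": {}, "finish_reason": "stop"}]}
+ #   data: [DONE]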
+ def send_message(message, model, think=False):
+     """Stream messages and forward the response"""
+     (session, cookies, session_token, conversation_id, model_map) = get_user_data()
+     trace_id, sentry_trace = generate_trace_id()
+
+     headers = {
+         "accept": "text/event-stream",
+         "accept-language": "zh-CN,zh;q=0.9",
+         "baggage": f"sentry-environment=production,sentry-release=975eec6685013679c139fc88db2c48e123d5c604,sentry-public_key=3476ea6df1585dd10e92cdae3a66ff49,sentry-trace_id={trace_id}",
+         "content-type": "text/plain;charset=UTF-8",
+         "cookie": cookies,
+         "sec-ch-ua": "\"Chromium\";v=\"116\", \"Not)A;Brand\";v=\"24\", \"Google Chrome\";v=\"116\"",
+         "sec-ch-ua-mobile": "?0",
+         "sec-ch-ua-platform": "\"Windows\"",
+         "sec-fetch-dest": "empty",
+         "sec-fetch-mode": "cors",
+         "sec-fetch-site": "same-origin",
+         "sentry-trace": sentry_trace,
+         "user-agent": random.choice(USER_AGENTS)
+     }
+
+     if session_token:
+         headers["session-token"] = session_token
+
+     payload = {
+         "requestId": str(uuid.uuid4()),
+         "deploymentConversationId": conversation_id,
+         "message": message,
+         "isDesktop": False,
+         "chatConfig": {
+             "timezone": "Asia/Shanghai",
+             "language": "zh-CN"
+         },
+         "llmName": model_map[model][1],
+         "externalApplicationId": model_map[model][0],
+         "regenerate": True,
+         "editPrompt": True
+     }
+
+     if think:
+         payload["useThinking"] = think
+
+     try:
+         response = session.post(
+             CHAT_URL,
+             headers=headers,
+             data=json.dumps(payload),
+             stream=True
+         )
+
+         response.raise_for_status()
+
+         def extract_segment(line_data):
+             try:
+                 data = json.loads(line_data)
+                 if "segment" in data:
+                     if isinstance(data["segment"], str):
+                         return data["segment"]
+                     elif isinstance(data["segment"], dict) and "segment" in data["segment"]:
+                         return data["segment"]["segment"]
+                 return ""
+             except:
+                 return ""
+
+         def generate():
+             id = ""
+             think_state = 2
+
+             yield "data: " + json.dumps({"object": "chat.completion.chunk", "choices": [{"delta": {"role": "assistant"}}]}) + "\n\n"
+
+             for line in response.iter_lines():
+                 if line:
+                     decoded_line = line.decode("utf-8")
+                     try:
+                         if think:
+                             data = json.loads(decoded_line)
+                             if data.get("type") != "text":
+                                 continue
+                             elif think_state == 2:
+                                 id = data.get("messageId")
+                                 segment = "<think>\n" + data.get("segment", "")
+                                 yield f"data: {json.dumps({'object': 'chat.completion.chunk', 'choices': [{'delta': {'content': segment}}]})}\n\n"
+                                 think_state = 1
+                             elif think_state == 1:
+                                 if data.get("messageId") != id:
+                                     segment = data.get("segment", "")
+                                     yield f"data: {json.dumps({'object': 'chat.completion.chunk', 'choices': [{'delta': {'content': segment}}]})}\n\n"
+                                 else:
+                                     segment = "\n</think>\n" + data.get("segment", "")
+                                     yield f"data: {json.dumps({'object': 'chat.completion.chunk', 'choices': [{'delta': {'content': segment}}]})}\n\n"
+                                     think_state = 0
+                             else:
+                                 segment = data.get("segment", "")
+                                 yield f"data: {json.dumps({'object': 'chat.completion.chunk', 'choices': [{'delta': {'content': segment}}]})}\n\n"
+                         else:
+                             segment = extract_segment(decoded_line)
+                             if segment:
+                                 yield f"data: {json.dumps({'object': 'chat.completion.chunk', 'choices': [{'delta': {'content': segment}}]})}\n\n"
+                     except Exception as e:
+                         print(f"Error while processing the response: {e}")
+
+             yield "data: " + json.dumps({"object": "chat.completion.chunk", "choices": [{"delta": {}, "finish_reason": "stop"}]}) + "\n\n"
+             yield "data: [DONE]\n\n"
+
+         return Response(generate(), mimetype="text/event-stream")
+     except requests.exceptions.RequestException as e:
+         error_details = str(e)
+         if hasattr(e, 'response') and e.response is not None:
+             if hasattr(e.response, 'text'):
+                 error_details += f" - Response: {e.response.text[:200]}"
+         print(f"Failed to send message: {error_details}")
+         return jsonify({"error": f"Failed to send message: {error_details}"}), 500
+
+
+ def send_message_non_stream(message, model, think=False):
+     """Handle a message without streaming"""
+     (session, cookies, session_token, conversation_id, model_map) = get_user_data()
+     trace_id, sentry_trace = generate_trace_id()
+
+     headers = {
+         "accept": "text/event-stream",
+         "accept-language": "zh-CN,zh;q=0.9",
+         "baggage": f"sentry-environment=production,sentry-release=975eec6685013679c139fc88db2c48e123d5c604,sentry-public_key=3476ea6df1585dd10e92cdae3a66ff49,sentry-trace_id={trace_id}",
+         "content-type": "text/plain;charset=UTF-8",
+         "cookie": cookies,
+         "sec-ch-ua": "\"Chromium\";v=\"116\", \"Not)A;Brand\";v=\"24\", \"Google Chrome\";v=\"116\"",
+         "sec-ch-ua-mobile": "?0",
+         "sec-ch-ua-platform": "\"Windows\"",
+         "sec-fetch-dest": "empty",
+         "sec-fetch-mode": "cors",
+         "sec-fetch-site": "same-origin",
+         "sentry-trace": sentry_trace,
+         "user-agent": random.choice(USER_AGENTS)
+     }
+
+     if session_token:
+         headers["session-token"] = session_token
+
+     payload = {
+         "requestId": str(uuid.uuid4()),
+         "deploymentConversationId": conversation_id,
+         "message": message,
+         "isDesktop": False,
+         "chatConfig": {
+             "timezone": "Asia/Shanghai",
+             "language": "zh-CN"
+         },
+         "llmName": model_map[model][1],
+         "externalApplicationId": model_map[model][0],
+         "regenerate": True,
+         "editPrompt": True
+     }
+
+     if think:
+         payload["useThinking"] = think
+
+     try:
+         response = session.post(
+             CHAT_URL,
+             headers=headers,
+             data=json.dumps(payload),
+             stream=True
+         )
+
+         response.raise_for_status()
+         buffer = io.StringIO()
+
+         def extract_segment(line_data):
+             try:
+                 data = json.loads(line_data)
+                 if "segment" in data:
+                     if isinstance(data["segment"], str):
+                         return data["segment"]
+                     elif isinstance(data["segment"], dict) and "segment" in data["segment"]:
+                         return data["segment"]["segment"]
+                 return ""
+             except:
+                 return ""
+
+         if think:
+             id = ""
+             think_state = 2
+             for line in response.iter_lines():
+                 if line:
+                     decoded_line = line.decode("utf-8")
+                     try:
+                         data = json.loads(decoded_line)
+                         if data.get("type") != "text":
+                             continue
+                         elif think_state == 2:
+                             id = data.get("messageId")
+                             segment = "<think>\n" + data.get("segment", "")
+                             buffer.write(segment)
+                             think_state = 1
+                         elif think_state == 1:
+                             if data.get("messageId") != id:
+                                 segment = data.get("segment", "")
+                                 buffer.write(segment)
+                             else:
+                                 segment = "\n</think>\n" + data.get("segment", "")
+                                 buffer.write(segment)
+                                 think_state = 0
+                         else:
+                             segment = data.get("segment", "")
+                             buffer.write(segment)
+                     except json.JSONDecodeError as e:
+                         print(f"Error while parsing the response: {e}")
+         else:
+             for line in response.iter_lines():
+                 if line:
+                     decoded_line = line.decode("utf-8")
+                     try:
+                         segment = extract_segment(decoded_line)
+                         if segment:
+                             buffer.write(segment)
+                     except Exception as e:
+                         print(f"Error while processing the response: {e}")
+
+         openai_response = {
+             "id": "chatcmpl-" + str(uuid.uuid4()),
+             "object": "chat.completion",
+             "created": int(time.time()),
+             "model": model,
+             "choices": [
+                 {
+                     "index": 0,
+                     "message": {"role": "assistant", "content": buffer.getvalue()},
+                     "finish_reason": "completed",
+                 }
+             ],
+         }
+         return jsonify(openai_response)
+     except Exception as e:
+         error_details = str(e)
+         if isinstance(e, requests.exceptions.RequestException) and e.response is not None:
+             error_details += f" - Response: {e.response.text[:200]}"
+         print(f"Failed to send message: {error_details}")
+         return jsonify({"error": f"Failed to send message: {error_details}"}), 500
+
+
+ def format_message(messages):
+     buffer = io.StringIO()
+     role_map, prefix, messages = extract_role(messages)
+     for message in messages:
+         role = message.get("role")
+         role = "\b" + role_map[role] if prefix else role_map[role]
+         content = message.get("content").replace("\\n", "\n")
+         pattern = re.compile(r"<\|removeRole\|>\n")
+         if pattern.match(content):
+             content = pattern.sub("", content)
+             buffer.write(f"{content}\n")
+         else:
+             buffer.write(f"{role}: {content}\n\n")
+     formatted_message = buffer.getvalue()
+     with open("message_log.txt", "w", encoding="utf-8") as f:
+         f.write(formatted_message)
+     return formatted_message
+
+
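+ # Illustrative example of the optional role-override header parsed below; if the first
+ # message starts with a block like this, the role names (and the prefix flag) replace
+ # the defaults and the block is stripped from the message:
+ #   <roleInfo>
+ #   user: H
+ #   assistant: A
+ #   system: S
+ #   prefix: 1
+ #   </roleInfo>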
+ def extract_role(messages):
+     role_map = {"user": "Human", "assistant": "Assistant", "system": "System"}
+     prefix = False
+     first_message = messages[0]["content"]
+     pattern = re.compile(
+         r"""
+         <roleInfo>\s*
+         user:\s*(?P<user>[^\n]*)\s*
+         assistant:\s*(?P<assistant>[^\n]*)\s*
+         system:\s*(?P<system>[^\n]*)\s*
+         prefix:\s*(?P<prefix>[^\n]*)\s*
+         </roleInfo>\n
+         """,
+         re.VERBOSE,
+     )
+     match = pattern.search(first_message)
+     if match:
+         role_map = {
+             "user": match.group("user"),
+             "assistant": match.group("assistant"),
+             "system": match.group("system"),
+         }
+         prefix = match.group("prefix") == "1"
+         messages[0]["content"] = pattern.sub("", first_message)
+         print("Extracted role map:")
+         print(
+             f"User: {role_map['user']}, Assistant: {role_map['assistant']}, System: {role_map['system']}"
+         )
+         print(f"Using prefix: {prefix}")
+     return (role_map, prefix, messages)
+
+
+ @app.route("/health", methods=["GET"])
+ def health_check():
+     global health_check_counter
+     health_check_counter += 1
+     return jsonify({
+         "status": "healthy",
+         "timestamp": datetime.now().isoformat(),
+         "checks": health_check_counter
+     })
+
+
+ def keep_alive():
+     """Perform a self health check every 20 minutes"""
+     while True:
+         try:
+             requests.get("http://127.0.0.1:7860/health")
+         except Exception:
+             pass  # Ignore errors and keep running
+         time.sleep(1200)  # 20 minutes
+
+
+ @app.route("/", methods=["GET"])
+ def index():
+     uptime = datetime.now() - START_TIME
+     days = uptime.days
+     hours, remainder = divmod(uptime.seconds, 3600)
+     minutes, seconds = divmod(remainder, 60)
+
+     if days > 0:
+         uptime_str = f"{days} days {hours} hours {minutes} minutes"
+     elif hours > 0:
+         uptime_str = f"{hours} hours {minutes} minutes"
+     else:
+         uptime_str = f"{minutes} minutes {seconds} seconds"
+
+     return render_template_string(
+         INDEX_HTML,
+         uptime=uptime_str,
+         health_checks=health_check_counter,
+         user_count=USER_NUM,
+         models=sorted(list(MODELS)),
+         year=datetime.now().year
+     )
+
+
+ if __name__ == "__main__":
+     # Start the keep-alive thread
+     threading.Thread(target=keep_alive, daemon=True).start()
+     port = int(os.environ.get("PORT", 9876))
+     app.run(port=port, host="0.0.0.0")
requirements.txt ADDED
Binary file (90 Bytes).