31 Commits

Author SHA1 Message Date
Clivia
a26235aba2 feat: deployment instructions for GPTs 2024-03-16 10:29:10 +08:00
Clivia
c6c3a6d93a Update ninja-image.yml 2024-03-08 21:43:00 +08:00
Clivia
00fbda47c7 feat: start Flask directly to fix excessive CPU usage on some machines 2024-03-08 21:32:27 +08:00
Clivia
8b9c44b285 feat: simplify flask run 2024-03-08 14:30:48 +08:00
Clivia
27b9591723 feat: simplify flask run 2024-03-08 14:30:01 +08:00
Yanyutin753
e7f97f4104 Support gpt-4-gizmo-XXX; automatically fall back to gpt-3.5-turbo on a wrong model name 2024-03-04 15:08:26 +08:00
Clivia
36bd2901fc Improve deployment instructions 2024-03-04 09:42:07 +08:00
Clivia
c10d11e6d2 Revise deployment instructions 2024-03-03 23:46:20 +08:00
Clivia
7bd55e3468 Revise deployment instructions 2024-02-27 22:54:02 +08:00
Clivia
5a40b35df1 Update deployment instructions 2024-02-27 22:53:11 +08:00
Clivia
509b233694 Revise deployment instructions 2024-02-27 22:46:55 +08:00
Clivia
129bbafe08 fix gpts.json is not accessible 2024-02-27 17:29:10 +08:00
Clivia
0214512b13 Improve deployment instructions 2024-02-26 15:38:13 +08:00
Clivia
46d2b9cb43 Adapt GPTs configuration 2024-02-26 14:44:33 +08:00
Clivia
e3d8cc2139 Adapt GPTs 2024-02-26 14:43:41 +08:00
Clivia
1fc6fa7784 Support gpt-4-gizmo-XXX, dynamic GPTS configuration 2024-02-26 14:14:13 +08:00
Clivia
aae4fd64d7 Dynamically adapt gpts 2024-02-26 14:08:07 +08:00
Clivia
cd983f0a0c Update main.py 2024-02-26 14:04:04 +08:00
Clivia
76993fcce8 Rename docker-deploy.yml to ninja-image.yml 2024-02-20 01:28:26 +08:00
Clivia
fa645a80d8 Update docker-deploy.yml 2024-02-20 01:27:59 +08:00
Clivia
1e3e233adc Update KEY_FOR_GPTS_INFO 2024-02-20 01:14:46 +08:00
Clivia
002ff558b0 Improve gpts structure 2024-02-19 21:52:50 +08:00
Clivia
6f66431bb5 Update docker-compose.yml 2024-02-17 17:45:57 +08:00
Clivia
d815bf991e Update docker-compose.yml 2024-02-17 17:44:06 +08:00
Clivia
10488aeaa5 Fix exception in custom access_token refresh 2024-02-17 02:09:20 +08:00
Clivia
97f1c4f45f Update main.py 2024-02-14 21:57:14 +08:00
Clivia
0d0ae4a95a Adapt to ninja 2024-02-14 21:40:00 +08:00
Clivia
37b0dd7c36 Adapt to ninja 2024-02-14 21:35:48 +08:00
Clivia
4a852bd070 Adapt to ninja 2024-02-14 21:31:50 +08:00
Clivia
0ca230a853 Adapt to ninja 2024-02-14 21:17:40 +08:00
Clivia
6eeadb49ac Revise ninja adaptation 2024-02-14 21:10:59 +08:00
10 changed files with 80 additions and 158 deletions

View File

@@ -1,47 +0,0 @@
name: xyhelper Build and Push Docker Image
on:
  release:
    types: [created]
  workflow_dispatch:
    inputs:
      tag:
        description: 'Tag Name'
        required: true
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_PASSWORD }}
      - name: Set tag name
        id: tag_name
        run: |
          if [ "${{ github.event_name }}" = "release" ]; then
            echo "::set-output name=tag::${GITHUB_REF#refs/tags/}"
          elif [ "${{ github.event_name }}" = "workflow_dispatch" ]; then
            echo "::set-output name=tag::${{ github.event.inputs.tag }}"
          fi
      - name: Build and push Docker image with Release tag
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: |
            yangclivia/pandora-to-api:${{ steps.tag_name.outputs.tag }}
            yangclivia/pandora-to-api:0.7.8
          platforms: linux/amd64,linux/arm64
          build-args: TARGETPLATFORM=${{ matrix.platform }}

5
.gitignore vendored
View File

@@ -1,5 +0,0 @@
*.json
*.log
*.json
*.log

6
.idea/encodings.xml generated
View File

@@ -1,6 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="Encoding">
    <file url="file://$PROJECT_DIR$/log/access.log" charset="GBK" />
  </component>
</project>

View File

@@ -1,29 +1,20 @@
## 0.7.8 xyhelper Project Introduction
![Docker Image Size (tag)](https://img.shields.io/docker/image-size/yangclivia/pandora-to-api/0.7.8)![Docker Pulls](https://img.shields.io/docker/pulls/yangclivia/pandora-to-api)[![GitHub Repo stars](https://img.shields.io/github/stars/Yanyutin753/RefreshToV1Api?style=social)](https://github.com/Yanyutin753/refresh-gpt-chat/stargazers)
## 0.7.7 ninja Edition Project Introduction
![Docker Image Size (tag)](https://img.shields.io/docker/image-size/yangclivia/pandora-to-api/0.7.7)![Docker Pulls](https://img.shields.io/docker/pulls/yangclivia/pandora-to-api)[![GitHub Repo stars](https://img.shields.io/github/stars/Yanyutin753/RefreshToV1Api?style=social)](https://github.com/Yanyutin753/refresh-gpt-chat/stargazers)
> [!IMPORTANT]
>
> Respect `xyhelper`, Respect `ninja`, Respect `Wizerd`
> Open source is hard work; please give me a free star!!!
> Respect Zhile, Respect Wizerd
Thanks to xyhelper, ninja, and Wizerd for their hard work. Salute!!!
Thanks to pandoraNext and Wizerd for their hard work. Salute!
This project supports:
1. Converting the free `backend-api` endpoint of xyhelper in `proxy` mode into a `/v1/chat/completions` endpoint, with both streaming and non-streaming responses.
1. Converting ninja's `backend-api` in `proxy` mode into a `/v1/chat/completions` endpoint, with both streaming and non-streaming responses.
2. Converting the free `backend-api` endpoint of xyhelper in `proxy` mode into a `/v1/images/generations` endpoint
2. Converting ninja's `backend-api` in `proxy` mode into a `/v1/images/generations` endpoint
3. Using a refresh_token directly as the request key, which makes it easy to hook into one_api (see the sketch after this list)
4. gpt-4-mobile and gpt-4-s, with dynamic support for all gpt-4-gizmo-XXX models
* **xyhelper's free backend-api endpoint, no captcha solving required**
* **The xyhelper endpoint allows at most 30 requests per minute; if that bothers you, please look elsewhere**
* I am a developer and want to modify the features myself -> go to the source repository https://github.com/xyhelper/chatgpt-mirror-server
* I have no server and no official-site account, I just want to use it -> go to the official site and buy the membership service we operate https://www.xyhelper.com.cn
* I want commercial use and to operate it myself -> right this way, boss https://www.xyhelper.com.cn/access
* I have a server and want to deploy it myself -> keep reading this document (and give it a star if you can)
4. gpt-4-mobile, gpt-4-s, and basically all GPTS
If this project helps you, please give it a little star~
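As a rough illustration of items 1 and 3 above, a call to the converted `/v1/chat/completions` endpoint could look like the sketch below. The address, model name, and token are placeholder assumptions, not values shipped with this project.

```python
# Illustrative only: the URL, token, and model below are placeholders.
import requests

resp = requests.post(
    "http://127.0.0.1:50011/v1/chat/completions",   # assumed deployment address (host port 50011)
    headers={
        "Authorization": "Bearer <refresh_token>",  # the refresh_token is used directly as the key
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4-s",
        "stream": False,
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=60,
)
print(resp.json())
```

For streaming, set `"stream": True` in the body and consume the response as server-sent events instead of calling `resp.json()`.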
@@ -43,7 +34,7 @@
- [x] Supports gpt-3.5-turbo
- [x] Supports gpts
- [x] Supports dynamic gpts
- [x] Supports streaming output
@@ -68,13 +59,13 @@
## Notes
> [!CAUTION]
> 1. Running this project requires xyhelper's free endpoint
> 1. Running this project requires ninja
>
> 2. This project actually forwards requests coming in on `/v1/chat/completions` to the `/backend-api/conversation` endpoint of xyhelper's free service, so it does not support high concurrency; please do not hook it up to high-concurrency tools such as `沉浸式翻译` (Immersive Translate).
> 2. This project actually forwards requests coming in on `/v1/chat/completions` to ninja's `/backend-api/conversation` endpoint, so it does not support high concurrency; please do not hook it up to high-concurrency tools such as `沉浸式翻译` (Immersive Translate).
>
> 3. This project supports using an Apple-platform refresh_token as the request key.
>
> 4. This project cannot bypass the official limits of OpenAI or xyhelper; it provides convenience only, not circumvention.
> 4. This project cannot bypass the official limits of OpenAI or ninja; it provides convenience only, not circumvention.
>
> 5. The art of asking: when the project fails to run properly, please include `DEBUG`-level logs when asking in an `Issue` or in the community group; otherwise fortune-telling mode will be engaged~
@@ -100,13 +91,13 @@
- `need_log_to_file`: whether to write logs to a file; allowed values are `true` and `false`, default `true`. The log file path is `./log/access.log`, and the log file is rotated daily by default.
- `process_workers`: sets the number of worker processes; leave it unchanged if you do not need it, otherwise set it to the value you need. Default `2`.
- `process_workers`: sets the number of worker processes; leave it unchanged if you do not need it, otherwise set it to the value you need. Setting it to `1` forces single-process mode.
- `process_threads`: sets the number of threads; leave it unchanged if you do not need it, otherwise set it to the value you need. Default `2`.
- `process_threads`: sets the number of threads; leave it unchanged if you do not need it, otherwise set it to the value you need. Setting it to `1` forces single-thread mode.
- `upstream_base_url`: the address of xyhelper's free endpoint, e.g. `https://demo.xyhelper.cn`; note: do not end it with a `/`.
- `upstream_base_url`: the ninja deployment address, e.g. `https://pandoranext.com`; note: do not end it with a `/`. You may use an internal address of PandoraNext that this project can reach.
- `upstream_api_prefix`: defaults to ""
- `upstream_api_prefix`: the API prefix in PandoraNext Proxy mode
- `backend_container_url`: used when displaying images generated by the dalle model; set it to an address of this project that users of clients such as [ChatGPT-Next-Web](https://github.com/ChatGPTNextWebTeam/ChatGPT-Next-Web) can reach, e.g. `http://1.2.3.4:50011`. Same as `UPLOAD_BASE_URL` in the original environment variables.
@@ -152,8 +143,7 @@ PS. Note: the addresses in arkose_urls must support PandoraNext's Arkose Token retrieval
- `enableOai`: whether to refresh the access_token from a refresh_token via the official site; takes effect only when `enableOai` is `true`.
- `xyhelper_refreshToAccess_Url`: the xyhelper URL used to refresh the access_token from a refresh_token; required when enableOai is false
- defaults to "https://demo.xyhelper.cn/applelogin"
- `ninja_refreshToAccess_Url`: the ninja URL used to refresh the access_token from a refresh_token; required when enableOai is false
- `redis`
@@ -186,7 +176,7 @@ PS. Note: the addresses in arkose_urls must support PandoraNext's Arkose Token retrieval
}
```
2. When sending a request directly, use the corresponding gpt-4-gizmo-XXXXXX, which is equivalent to the id value above
2. When sending a request directly, use the corresponding gpt-4-gizmo-XXXXXX, which is equivalent to the id value above
```json
{
"stream":true,
@@ -203,7 +193,7 @@ PS. Note: the addresses in arkose_urls must support PandoraNext's Arkose Token retrieval
Request method: `POST`
Request headers: just carry `Authorization` and `Content-Type` as usual; the value of `Authorization` is `Bearer <refresh_token>` and the value of `Content-Type` is `application/json`
Request headers: just carry `Authorization` and `Content-Type` as usual; the value of `Authorization` is `Bearer <ninja fk>` and the value of `Content-Type` is `application/json`
Request body format example:
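A rough sketch of such a body, with placeholder values only; the gizmo id matches the sample entry in gpts.json shown elsewhere in this document:

```python
# Illustrative request body only; every value below is a placeholder.
payload = {
    "stream": True,
    "model": "gpt-4-gizmo-g-YyyyMT9XH-chatgpt-classic",  # or the configured alias "gpt-4-classic"
    "messages": [{"role": "user", "content": "Hello"}],
}
```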
@@ -404,4 +394,4 @@ services:
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=Yanyutin753/xyhelperV1Api_refresh&type=Date)](https://star-history.com/#Yanyutin753/xyhelperV1Api_refresh&Date)
[![Star History Chart](https://api.star-history.com/svg?repos=Yanyutin753/ninjaToV1Api_refresh&type=Date)](https://star-history.com/#Yanyutin753/ninjaToV1Api_refresh&Date)

View File

@@ -4,7 +4,7 @@
"process_workers": 2,
"process_threads": 2,
"proxy": "",
"upstream_base_url": "https://demo.xyhelper.cn",
"upstream_base_url": "",
"upstream_api_prefix": "",
"backend_container_url": "",
"backend_container_api_prefix": "",
@@ -26,8 +26,8 @@
},
"refresh_ToAccess": {
"stream_sleep_time": 0,
"enableOai":"false",
"xyhelper_refreshToAccess_Url": "https://demo.xyhelper.cn/applelogin"
"enableOai":"true",
"ninja_refreshToAccess_Url": ""
},
"redis": {
"host": "redis",

View File

@@ -1 +1,5 @@
{}
{
"gpt-4-classic": {
"id":"g-YyyyMT9XH-chatgpt-classic"
}
}

View File

@@ -2,7 +2,7 @@ version: '3'
services:
  backend-to-api:
    image: yangclivia/pandora-to-api:0.7.8
    image: yangclivia/pandora-to-api:0.7.7
    restart: always
    ports:
      - "50011:33333"

View File

115
main.py
View File

@@ -62,10 +62,10 @@ BOT_MODE_ENABLED_CODE_BLOCK_OUTPUT = BOT_MODE.get('enabled_plugin_output', 'fals
BOT_MODE_ENABLED_PLAIN_IMAGE_URL_OUTPUT = BOT_MODE.get('enabled_plain_image_url_output', 'false').lower() == 'true'
# xyhelperToV1Api_refresh
# ninjaToV1Api_refresh
REFRESH_TOACCESS = CONFIG.get('refresh_ToAccess', {})
REFRESH_TOACCESS_ENABLEOAI = REFRESH_TOACCESS.get('enableOai', 'true').lower() == 'true'
REFRESH_TOACCESS_XYHELPER_REFRESHTOACCESS_URL = REFRESH_TOACCESS.get('xyhelper_refreshToAccess_Url', '')
REFRESH_TOACCESS_NINJA_REFRESHTOACCESS_URL = REFRESH_TOACCESS.get('ninja_refreshToAccess_Url', '')
STEAM_SLEEP_TIME = REFRESH_TOACCESS.get('steam_sleep_time', 0)
NEED_DELETE_CONVERSATION_AFTER_RESPONSE = CONFIG.get('need_delete_conversation_after_response',
@@ -198,15 +198,12 @@ def oaiGetAccessToken(refresh_token):
return None
# Get access_token via xyhelper
def xyhelperGetAccessToken(getAccessTokenUrl, refresh_token):
# Get access_token via ninja
def ninjaGetAccessToken(getAccessTokenUrl, refresh_token):
try:
logger.info("将通过这个网址请求access_token" + getAccessTokenUrl)
data = {
'refresh_token': refresh_token,
}
response = requests.post(getAccessTokenUrl, data=data)
headers = {"Authorization": "Bearer " + refresh_token}
response = requests.post(getAccessTokenUrl, headers=headers)
if not response.ok:
logger.error("Request 失败: " + response.text.strip())
return None
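In isolation, the token exchange introduced above boils down to roughly the following sketch; the response parsing is an assumption, since this hunk only shows the error branch, and the URL is whatever `ninja_refreshToAccess_Url` points to.

```python
# Standalone sketch of the refresh_token -> access_token exchange shown above.
# Assumption: the response body carries the token in an "access_token" field;
# that part of the function is not visible in this hunk.
import requests

def ninja_get_access_token(get_access_token_url, refresh_token):
    headers = {"Authorization": "Bearer " + refresh_token}
    response = requests.post(get_access_token_url, headers=headers)
    if not response.ok:
        return None
    return response.json().get("access_token")
```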
@@ -230,7 +227,7 @@ def updateGptsKey():
if REFRESH_TOACCESS_ENABLEOAI:
access_token = oaiGetAccessToken(KEY_FOR_GPTS_INFO)
else:
access_token = xyhelperGetAccessToken(REFRESH_TOACCESS_XYHELPER_REFRESHTOACCESS_URL, KEY_FOR_GPTS_INFO)
access_token = ninjaGetAccessToken(REFRESH_TOACCESS_NINJA_REFRESHTOACCESS_URL, KEY_FOR_GPTS_INFO)
if access_token.startswith("eyJhb"):
KEY_FOR_GPTS_INFO_ACCESS_TOKEN = access_token
logging.info("KEY_FOR_GPTS_INFO_ACCESS_TOKEN被更新:" + KEY_FOR_GPTS_INFO_ACCESS_TOKEN)
@@ -255,7 +252,7 @@ def fetch_gizmo_info(base_url, proxy_api_prefix, model_id):
# Add the configuration to the global list
def add_config_to_global_list(base_url, proxy_api_prefix, gpts_data):
global gpts_configurations
updateGptsKey() # cSpell:ignore Gpts
updateGptsKey()
# print(f"gpts_data: {gpts_data}")
for model_name, model_info in gpts_data.items():
# print(f"model_name: {model_name}")
@@ -271,11 +268,12 @@ def add_config_to_global_list(base_url, proxy_api_prefix, gpts_data):
else:
logger.info(f"Fetching gpts info for {model_name}, {model_id}")
gizmo_info = fetch_gizmo_info(base_url, proxy_api_prefix, model_id)
# If the data was fetched successfully, store it in Redis
if gizmo_info:
redis_client.set(model_id, str(gizmo_info))
logger.info(f"Cached gizmo info for {model_name}, {model_id}")
# Check whether the model name is already in the list
if gizmo_info and not any(d['name'] == model_name for d in gpts_configurations):
gpts_configurations.append({
@@ -287,6 +285,7 @@ def add_config_to_global_list(base_url, proxy_api_prefix, gpts_data):
logger.info(f"Model already exists in the list, skipping...")
def generate_gpts_payload(model, messages):
model_config = find_model_config(model)
if model_config:
@@ -322,11 +321,9 @@ scheduler.start()
# PANDORA_UPLOAD_URL = 'files.pandoranext.com'
VERSION = '0.7.8.3'
VERSION = '0.7.7.3'
# VERSION = 'test'
UPDATE_INFO = 'flask直接启动解决部分机cpu占用过大问题'
# UPDATE_INFO = '【仅供临时测试使用】 '
# Parse the information in the response
@@ -405,7 +402,7 @@ with app.app_context():
logger.info(f"REFRESH_TOACCESS_ENABLEOAI: {REFRESH_TOACCESS_ENABLEOAI}")
if not REFRESH_TOACCESS_ENABLEOAI:
logger.info(f"REFRESH_TOACCESS_XYHELPER_REFRESHTOACCESS_URL: {REFRESH_TOACCESS_XYHELPER_REFRESHTOACCESS_URL}")
logger.info(f"REFRESH_TOACCESS_NINJA_REFRESHTOACCESS_URL: {REFRESH_TOACCESS_NINJA_REFRESHTOACCESS_URL}")
if BOT_MODE_ENABLED:
logger.info(f"enabled_markdown_image_output: {BOT_MODE_ENABLED_MARKDOWN_IMAGE_OUTPUT}")
@@ -413,10 +410,10 @@ with app.app_context():
logger.info(f"enabled_bing_reference_output: {BOT_MODE_ENABLED_BING_REFERENCE_OUTPUT}")
logger.info(f"enabled_plugin_output: {BOT_MODE_ENABLED_CODE_BLOCK_OUTPUT}")
# xyhelperToV1Api_refresh
# ninjaToV1Api_refresh
logger.info(f"REFRESH_TOACCESS_ENABLEOAI: {REFRESH_TOACCESS_ENABLEOAI}")
logger.info(f"REFRESH_TOACCESS_XYHELPER_REFRESHTOACCESS_URL: {REFRESH_TOACCESS_XYHELPER_REFRESHTOACCESS_URL}")
logger.info(f"REFRESH_TOACCESS_NINJA_REFRESHTOACCESS_URL: {REFRESH_TOACCESS_NINJA_REFRESHTOACCESS_URL}")
logger.info(f"STEAM_SLEEP_TIME: {STEAM_SLEEP_TIME}")
if not BASE_URL:
@@ -870,8 +867,8 @@ def send_text_prompt_and_get_response(messages, api_key, stream, model):
ori_model_name = model_config.get('ori_name', model)
logger.info(f"原模型名: {ori_model_name}")
else:
ori_model_name = model
logger.info(f"请求模型名: {model}")
ori_model_name = model
if ori_model_name == 'gpt-4-s':
payload = {
# Build the payload
@@ -968,8 +965,7 @@ def send_text_prompt_and_get_response(messages, api_key, stream, model):
"force_paragen": False,
"force_rate_limit": False
}
logger.debug('KEY_FOR_GPTS_INFO Or Request Model is not accessible')
logger.debug('KEY_FOR_GPTS_INFO is not accessible')
else:
payload = generate_gpts_payload(model, formatted_messages)
if not payload:
@@ -1145,7 +1141,7 @@ def replace_sandbox(text, conversation_id, message_id, api_key):
sandbox_path = match.group(1)
download_url = get_download_url(conversation_id, message_id, sandbox_path)
if download_url == None:
return "\n```\nError: 沙箱文件下载失败,这可能是因为您的帐号启用了隐私模式\n```"
return "\n```\nError: 沙箱文件下载失败,这可能是因为您启用了隐私模式\n```"
file_name = extract_filename(download_url)
timestamped_file_name = timestamp_filename(file_name)
if USE_OAIUSERCONTENT_URL == False:
@@ -2302,8 +2298,6 @@ def chat_completions():
accessible_model_list = get_accessible_model_list()
if model not in accessible_model_list and not 'gpt-4-gizmo-' in model:
return jsonify({"error": "model is not accessible"}), 401
elif 'gpt-4-gizmo-' in model and not KEY_FOR_GPTS_INFO:
return jsonify({"error": "key_for_gpts_info is not accessible"}), 400
stream = data.get('stream', False)
@@ -2320,7 +2314,7 @@ def chat_completions():
if REFRESH_TOACCESS_ENABLEOAI:
api_key = oaiGetAccessToken(api_key)
else:
api_key = xyhelperGetAccessToken(REFRESH_TOACCESS_XYHELPER_REFRESHTOACCESS_URL, api_key)
api_key = ninjaGetAccessToken(REFRESH_TOACCESS_NINJA_REFRESHTOACCESS_URL, api_key)
if not api_key.startswith("eyJhb"):
return jsonify({"error": "refresh_token is wrong or refresh_token url is wrong!"}), 401
add_to_dict(refresh_token, api_key)
@@ -2422,36 +2416,32 @@ def chat_completions():
ori_model_name = model_config.get('ori_name', model)
input_tokens = count_total_input_words(messages, ori_model_name)
comp_tokens = count_tokens(all_new_text, ori_model_name)
if input_tokens >= 100 and comp_tokens <= 0:
# Return an error message with status code 429
error_response = {"error": "空回复"}
return jsonify(error_response), 429
else:
response_json = {
"id": generate_unique_id("chatcmpl"),
"object": "chat.completion",
"created": int(time.time()), # 使用当前时间戳
"model": model, # 使用请求中指定的模型
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": all_new_text # 使用累积的文本
},
"finish_reason": "stop"
}
],
"usage": {
# the token counts here need to be calculated according to the actual situation
"prompt_tokens": input_tokens,
"completion_tokens": comp_tokens,
"total_tokens": input_tokens + comp_tokens
},
"system_fingerprint": None
}
# Return the JSON response
return jsonify(response_json)
response_json = {
"id": generate_unique_id("chatcmpl"),
"object": "chat.completion",
"created": int(time.time()), # 使用当前时间戳
"model": model, # 使用请求中指定的模型
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": all_new_text # 使用累积的文本
},
"finish_reason": "stop"
}
],
"usage": {
# the token counts here need to be calculated according to the actual situation
"prompt_tokens": input_tokens,
"completion_tokens": comp_tokens,
"total_tokens": input_tokens + comp_tokens
},
"system_fingerprint": None
}
# Return the JSON response
return jsonify(response_json)
else:
return Response(generate(), mimetype='text/event-stream')
@@ -2466,8 +2456,6 @@ def images_generations():
accessible_model_list = get_accessible_model_list()
if model not in accessible_model_list and not 'gpt-4-gizmo-' in model:
return jsonify({"error": "model is not accessible"}), 401
elif 'gpt-4-gizmo-' in model and not KEY_FOR_GPTS_INFO:
return jsonify({"error": "key_for_gpts_info is not accessible"}), 400
prompt = data.get('prompt', '')
@@ -2491,7 +2479,7 @@ def images_generations():
refresh_token = api_key
api_key = oaiGetAccessToken(api_key)
else:
api_key = xyhelperGetAccessToken(REFRESH_TOACCESS_XYHELPER_REFRESHTOACCESS_URL, api_key)
api_key = ninjaGetAccessToken(REFRESH_TOACCESS_NINJA_REFRESHTOACCESS_URL, api_key)
if not api_key.startswith("eyJhb"):
return jsonify({"error": "refresh_token is wrong or refresh_token url is wrong!"}), 401
add_to_dict(refresh_token, api_key)
@@ -2671,7 +2659,7 @@ def updateRefresh_dict():
if REFRESH_TOACCESS_ENABLEOAI:
access_token = oaiGetAccessToken(key)
else:
access_token = xyhelperGetAccessToken(REFRESH_TOACCESS_XYHELPER_REFRESHTOACCESS_URL, key)
access_token = ninjaGetAccessToken(REFRESH_TOACCESS_NINJA_REFRESHTOACCESS_URL, key)
if not access_token.startswith("eyJhb"):
logger.debug("refresh_token is wrong or refresh_token url is wrong!")
error_num += 1
@@ -2679,18 +2667,17 @@ def updateRefresh_dict():
success_num += 1
logging.info("更新成功: " + str(success_num) + ", 失败: " + str(error_num))
logger.info(f"==========================================")
logging.info("开始更新KEY_FOR_GPTS_INFO_ACCESS_TOKEN和GPTS配置信息.......")
logging.info("开始更新KEY_FOR_GPTS_INFO_ACCESS_TOKEN和GPTS配置信息......")
# Load the configuration and add it to the global list
gpts_data = load_gpts_config("./data/gpts.json")
add_config_to_global_list(BASE_URL, PROXY_API_PREFIX, gpts_data)
accessible_model_list = get_accessible_model_list()
logger.info(f"当前可用 GPTS 列表: {accessible_model_list}")
# Check whether the list contains duplicate model names
if len(accessible_model_list) != len(set(accessible_model_list)):
raise Exception("检测到重复的模型名称,请检查环境变量或配置文件......")
raise Exception("检测到重复的模型名称,请检查环境变量或配置文件")
logging.info("更新KEY_FOR_GPTS_INFO_ACCESS_TOKEN和GPTS配置信息成功......")
logger.info(f"当前可用 GPTS 列表: {accessible_model_list}")
logger.info(f"==========================================")

View File

@@ -11,7 +11,7 @@ if [ -z "$PROCESS_WORKERS" ]; then
export PROCESS_WORKERS
if [ -z "$PROCESS_WORKERS" ]; then
PROCESS_WORKERS=2
PROCESS_WORKERS=1
fi
fi
@@ -32,4 +32,3 @@ echo "PROCESS_THREADS: ${PROCESS_THREADS}"
# Start Gunicorn and use the tee command to send logs to both the file and the console
exec gunicorn -w ${PROCESS_WORKERS} --threads ${PROCESS_THREADS} --bind 0.0.0.0:33333 main:app --access-logfile - --error-logfile - --timeout 60