22 Commits

Author SHA1 Message Date
Clivia
81724dae50 0.7.9.4 Fix empty replies, support more file types 2024-07-16 15:07:46 +08:00
Yanyutin753
3cc275502a Support the latest gpt-4-o model and redirect gpt-4-mobile to gpt-4-s 2024-05-16 19:34:44 +08:00
Clivia
21fd5b81be Support gpt-4o 2024-05-14 18:49:40 +08:00
Clivia
9017ec892f Support the latest gpt-4o model 2024-05-14 18:44:47 +08:00
Yanyutin753
12f7d616d7 feat: gpt-4-o supports file uploads 2024-05-14 13:56:58 +08:00
Yanyutin753
10782fbe1f Support the latest gpt-4-o model 2024-05-14 13:07:19 +08:00
Clivia
8a9932b18d Improve deployment instructions 2024-04-06 00:33:54 +08:00
Clivia
4b706bfb8d Update deployment instructions 2024-04-06 00:30:26 +08:00
Clivia
7a1d7541bf Improve deployment instructions 2024-04-05 17:44:28 +08:00
Clivia
39d394e28b Improve deployment instructions 2024-04-05 01:16:30 +08:00
Clivia
816e78ab81 Support Team conversations; add a /getAccountID endpoint for querying the ChatGPT-Account-ID 2024-04-04 22:10:38 +08:00
Clivia
81d32e753a Update oaifree-docker-image.yml 2024-04-04 22:02:12 +08:00
Yanyutin753
3c9b6c12cc Support Team conversations; add a /getAccountID endpoint for querying the ChatGPT-Account-ID 2024-04-04 22:01:45 +08:00
Yanyutin753
6530dc6029 Merge branch 'main' of https://github.com/Yanyutin753/RefreshToV1Api 2024-04-04 16:35:45 +08:00
Yanyutin753
a23f6a6440 Simplify deployment: start directly with Flask 2024-04-04 16:35:35 +08:00
Clivia
b0ec95520d Improve deployment instructions 2024-04-04 16:32:40 +08:00
Clivia
2c9b3d72f6 Update oaifree-docker-image.yml 2024-04-04 15:45:57 +08:00
Clivia
faa3c2c825 simply use flask 2024-04-04 15:43:00 +08:00
Clivia
62925a7f72 Improve deployment instructions 2024-04-04 12:00:23 +08:00
Clivia
be60a8fe71 Improve deployment instructions 2024-04-04 10:49:59 +08:00
Clivia
17bfdb5dae Improve deployment instructions 2024-04-04 10:48:58 +08:00
Yanyutin753
ee0ce60272 fix BUG 2024-04-04 10:27:47 +08:00
7 changed files with 203 additions and 119 deletions

oaifree-docker-image.yml
View File

@@ -42,5 +42,6 @@ jobs:
push: true
tags: |
yangclivia/pandora-to-api:${{ steps.tag_name.outputs.tag }}
yangclivia/pandora-to-api:0.7.9
platforms: linux/amd64,linux/arm64
build-args: TARGETPLATFORM=${{ matrix.platform }}

.idea/misc.xml (generated, 3 changed lines)
View File

@@ -1,4 +1,7 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
<component name="Black">
<option name="sdkName" value="Python 3.8 (pythonProject7)" />
</component>
<component name="ProjectRootManager" version="2" project-jdk-name="Python 3.8 (pythonProject7)" project-jdk-type="Python SDK" />
</project>

Dockerfile
View File

@@ -10,15 +10,13 @@ COPY . /app
# Set environment variables
ENV PYTHONUNBUFFERED=1
RUN chmod +x /app/start.sh
RUN apt update && apt install -y jq
RUN chmod +x /app/main.py
# # Set the pip index to the Tsinghua University mirror
# RUN pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
# Install any required dependencies
RUN pip install --no-cache-dir flask flask_apscheduler gunicorn requests Pillow flask-cors tiktoken fake_useragent redis websocket-client pysocks requests[socks] websocket-client[optional]
RUN pip install --no-cache-dir flask flask_apscheduler requests Pillow flask-cors tiktoken fake_useragent redis websocket-client pysocks requests[socks] websocket-client[optional]
# Run the Flask application when the container starts
CMD ["/app/start.sh"]
CMD ["python3", "main.py"]

README.md
View File

@@ -1,4 +1,7 @@
## Project Introduction
# [RefreshToV1Api](https://github.com/Yanyutin753/RefreshToV1Api)
![Docker Image Size (tag)](https://img.shields.io/docker/image-size/yangclivia/pandora-to-api/0.7.8)![Docker Pulls](https://img.shields.io/docker/pulls/yangclivia/pandora-to-api)[![GitHub Repo stars](https://img.shields.io/github/stars/Yanyutin753/RefreshToV1Api?style=social)](https://github.com/Yanyutin753/refresh-gpt-chat/stargazers)
## [Project Introduction](https://github.com/Yanyutin753/RefreshToV1Api)
> [!IMPORTANT]
>
@@ -14,9 +17,11 @@
3. Supports using a refresh_token directly as the request key, making it easy to integrate with one_api
4. Supports gpt-4-mobile, gpt-4-s, and basically all GPTS
4. Supports gpt-4o, gpt-4-s, and basically all GPTS
* **oaiFree's free backend-api endpoint, no captcha solving required**
* **oaiFree's backend-api endpoint, no captcha solving required**
* **oaiFree's backend-api endpoint only supports ChatGPT Plus accounts**
* This project may later be tied to the [Linux.do](https://linux.do/latest) forum; please prepare in advance
@@ -34,11 +39,11 @@
- [x] Supports gpt-4-s
- [x] Supports gpt-4-mobile
- [x] Supports gpt-4o
- [x] Supports gpt-3.5-turbo
- [x] gpts not yet supported
- [x] Supports gpts
- [x] Supports streaming output
@@ -85,20 +90,19 @@
4. gpt-3.5-turbo
## Docker-Compose Deployment
## Deployment Instructions
<details>
### Docker-Compose Deployment
The repository already contains the relevant files and directories. After pulling it locally, edit the environment variables in the docker-compose.yml file and run `docker-compose up -d`.
## config.json variable reference:
### config.json variable reference:
- `log_level`: sets the log level; allowed values are `DEBUG`, `INFO`, `WARNING`, `ERROR`; defaults to `DEBUG`.
- `need_log_to_file`: whether to also write logs to a file; allowed values are `true` and `false`, default `true`; the log file path is `./log/access.log`, and the file is rotated daily by default.
- `process_workers`: sets the number of worker processes; leave it unchanged if you do not need to adjust it; defaults to `2`.
- `process_threads`: sets the number of threads; leave it unchanged if you do not need to adjust it; defaults to `2`.
- `upstream_base_url`: the oaiFree API address, e.g. `https://chat.oaifree.com`; note: do not end it with `/`.
- `upstream_api_prefix`: defaults to ["dad04481-fa3f-494e-b90c-b822128073e5"]; more values can be added later.
@@ -147,7 +151,7 @@ PS. Note: the addresses in arkose_urls must support PandoraNext's Arkose Token retrieval
- `enableOai`: sets whether to refresh the access_token via the official site using the refresh_token; only takes effect when `enableOai` is `true`.
- `oaiFree_refreshToAccess_Url`: sets the oaiFree URL used to refresh the access_token from a refresh_token; required when enableOai is false.
- `oaifree_refreshToAccess_Url`: sets the oaiFree URL used to refresh the access_token from a refresh_token; required when enableOai is false.
- Defaults to "https://token.oaifree.com/api/auth/refresh"
- `redis`
@@ -160,7 +164,7 @@ PS. Note: the addresses in arkose_urls must support PandoraNext's Arkose Token retrieval
- `db`: the Redis database, 0 by default; you can set this to another database if you have special needs.
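For reference, a minimal sketch of the refresh flow these options configure, assuming the oaiFree endpoint accepts a form-encoded `refresh_token` field (as `oaiFreeGetAccessToken()` in the main.py diff does) and returns JSON containing an `access_token` field, which is an assumption here:

```python
# Hedged sketch: exchange a refresh_token for an access_token through the
# oaiFree refresh endpoint. The "access_token" response key is assumed.
from typing import Optional

import requests


def refresh_to_access(refresh_token: str,
                      url: str = "https://token.oaifree.com/api/auth/refresh") -> Optional[str]:
    # The endpoint is assumed to accept form data with a single refresh_token field.
    resp = requests.post(url, data={"refresh_token": refresh_token}, timeout=30)
    if not resp.ok:
        print("Refresh failed:", resp.text.strip())
        return None
    return resp.json().get("access_token")


if __name__ == "__main__":
    print(refresh_to_access("your_refresh_token_here"))
```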
## GPTS Configuration
### GPTS Configuration
To use GPTS, edit the `gpts.json` file. Each object's key is the model name used when calling that GPTS, and `id` is the corresponding model id, i.e. the suffix of that GPTS's share link. Separate multiple GPTS entries with commas.
@@ -181,7 +185,7 @@ PS. Note: the addresses in arkose_urls must support PandoraNext's Arkose Token retrieval
Note: when using this configuration, make sure the `KEY_FOR_GPTS_INFO` environment variable in `docker-compose.yml` is filled in correctly; the `key` set by that variable is allowed to access all configured GPTS.
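As an illustration, a minimal request sketch for calling a configured GPTS, assuming the service exposes an OpenAI-compatible `/v1/chat/completions` route (prefixed with `backend_container_api_prefix` if that is set); the host, port, and `MyGPT` model name below are placeholders, not values from this repository:

```python
# Hedged sketch: call the OpenAI-compatible chat endpoint with a GPTS model name.
# A refresh_token can be used directly as the key, per the feature list above.
import requests

BASE = "http://localhost:33333"              # placeholder service address
KEY = "your_refresh_token_or_access_token"   # placeholder credential

resp = requests.post(
    f"{BASE}/v1/chat/completions",
    headers={"Authorization": f"Bearer {KEY}"},
    json={
        "model": "MyGPT",                    # a key defined in gpts.json
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": False,
    },
    timeout=300,
)
print(resp.json())
```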
## Image Generation API Usage
### Image Generation API Usage
Endpoint URI: `/v1/images/generations`
@@ -218,7 +222,7 @@ PS. Note: the addresses in arkose_urls must support PandoraNext's Arkose Token retrieval
}
```
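A minimal request sketch for this endpoint; the request body is assumed to mirror the OpenAI images API with `model` and `prompt` fields (only `model` is visible in the main.py diff, so `prompt` and the other concrete values below are assumptions):

```python
# Hedged sketch: call the image generation endpoint.
# Host, key, model, and the "prompt" field are placeholders/assumptions.
import requests

BASE = "http://localhost:33333"              # placeholder service address
KEY = "your_refresh_token_or_access_token"   # placeholder credential

resp = requests.post(
    f"{BASE}/v1/images/generations",
    headers={"Authorization": f"Bearer {KEY}"},
    json={"model": "gpt-4-s", "prompt": "A watercolor lighthouse at dusk"},
    timeout=300,
)
print(resp.json())
```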
## File Recognition API Usage
### File Recognition API Usage
Called the same way as the official `gpt-4-vision-preview` API.
@@ -317,8 +321,18 @@ PS. Note: the addresses in arkose_urls must support PandoraNext's Arkose Token retrieval
}
}
```
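Since the call format follows the official vision-style API, a request sketch might look like the following; the host, key, model name, and image URL are placeholders:

```python
# Hedged sketch: send an image using the gpt-4-vision-preview message format.
# All concrete values below are placeholders.
import requests

BASE = "http://localhost:33333"              # placeholder service address
KEY = "your_refresh_token_or_access_token"   # placeholder credential

resp = requests.post(
    f"{BASE}/v1/chat/completions",
    headers={"Authorization": f"Bearer {KEY}"},
    json={
        "model": "gpt-4-s",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
            ],
        }],
        "stream": False,
    },
    timeout=300,
)
print(resp.json())
```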
### Get ChatGPT-Account-ID Endpoint
## Example
Endpoint URI: `/getAccountID`
Request method: `POST`
```
Add to the request headers:
Authorization: Bearer refresh_token or access_token
```
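A minimal client sketch for this endpoint; based on the getAccountID() handler added in the main.py diff, the response groups account IDs under "plus" and "team" keys (the host below is a placeholder):

```python
# Hedged sketch: query /getAccountID and print the returned account ID lists.
import requests

BASE = "http://localhost:33333"              # placeholder service address
KEY = "your_refresh_token_or_access_token"   # placeholder credential

resp = requests.post(f"{BASE}/getAccountID",
                     headers={"Authorization": f"Bearer {KEY}"},
                     timeout=60)
print(resp.json())   # e.g. {"plus": ["..."], "team": ["..."]}
```

Judging from the updated chat_completions handler in this diff, a returned Team account ID can then be appended to the key as `Authorization: Bearer <token>,<account_id>`, which the server forwards upstream as the `ChatGPT-Account-ID` header; treat this format as inferred from the code rather than officially documented.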
### Example
Taking the docker-compose deployment of the ChatGPT-Next-Web project as an example, here is a simple deployment configuration file snippet:
@@ -335,6 +349,8 @@ services:
- CUSTOM_MODELS=+gpt-4-s,+gpt-4-mobile,+<model name from gpts.json>
```
</details>
## Feature Demo
<details>
@@ -377,11 +393,12 @@ services:
> * This project only provides a forwarding API 🥰
> * Open-source work is not easy; please leave a star!!!
### New group chat: if you starred the repo ⭐️ you can join to discuss deployment and I will add you to the group; no ads, ad posters will be removed
<img src="https://github.com/Yanyutin753/PandoraNext-TokensTool/assets/132346501/6544e8ed-6673-48f9-95a6-c13255acbab1" width="300" height="300">
## Sponsor
### If you find my open-source project helpful, you can sponsor me a cup of coffee. Thank you very much!!!
<img src="https://github.com/Yanyutin753/RefreshToV1Api/assets/132346501/e5ab5e80-1cf2-4822-ae36-f9d0b11ed1b1" width="300" height="300">
### Please give me a free ⭐!!!
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=Yanyutin753/oaiFreeV1Api_refresh&type=Date)](https://star-history.com/#Yanyutin753/oaiFreeV1Api_refresh&Date)
[![Star History Chart](https://api.star-history.com/svg?repos=Yanyutin753/RefreshToV1Api&type=Date)](https://star-history.com/#Yanyutin753/oaiFreeV1Api_refresh&Date)

config.json
View File

@@ -1,8 +1,6 @@
{
"log_level": "INFO",
"need_log_to_file": "true",
"process_workers": 2,
"process_threads": 2,
"proxy": "",
"upstream_base_url": "https://chat.oaifree.com",
"upstream_api_prefix": ["dad04481-fa3f-494e-b90c-b822128073e5"],
@@ -12,6 +10,7 @@
"gpt_4_s_new_name": "gpt-4-s",
"gpt_4_mobile_new_name": "gpt-4-mobile,dall-e-3",
"gpt_3_5_new_name": "gpt-3.5-turbo",
"gpt_4_o_new_name": "gpt-4-o,gpt-4o",
"need_delete_conversation_after_response": "true",
"use_oaiusercontent_url": "false",
"custom_arkose_url": "false",

main.py (216 changed lines)
View File

@@ -1,32 +1,22 @@
# Import the required libraries
from flask import Flask, request, jsonify, Response, send_from_directory
from flask_cors import CORS, cross_origin
import requests
import uuid
import json
import time
import os
from datetime import datetime
from PIL import Image
import io
import re
import threading
from queue import Queue, Empty
import logging
from logging.handlers import TimedRotatingFileHandler
import uuid
import hashlib
import requests
import json
import hashlib
from PIL import Image
from io import BytesIO
from urllib.parse import urlparse, urlunparse
import base64
from fake_useragent import UserAgent
import hashlib
import json
import logging
import mimetypes
import os
import uuid
from datetime import datetime
from io import BytesIO
from logging.handlers import TimedRotatingFileHandler
from queue import Queue
from urllib.parse import urlparse
import requests
from fake_useragent import UserAgent
from flask import Flask, request, jsonify, Response, send_from_directory
from flask_apscheduler import APScheduler
from flask_cors import CORS, cross_origin
# Read the configuration file
@@ -51,6 +41,7 @@ API_PREFIX = CONFIG.get('backend_container_api_prefix', '')
GPT_4_S_New_Names = CONFIG.get('gpt_4_s_new_name', 'gpt-4-s').split(',')
GPT_4_MOBILE_NEW_NAMES = CONFIG.get('gpt_4_mobile_new_name', 'gpt-4-mobile').split(',')
GPT_3_5_NEW_NAMES = CONFIG.get('gpt_3_5_new_name', 'gpt-3.5-turbo').split(',')
GPT_4_O_NEW_NAMES = CONFIG.get('gpt_4_o_new_name', 'gpt-4-o').split(',')
BOT_MODE = CONFIG.get('bot_mode', {})
BOT_MODE_ENABLED = BOT_MODE.get('enabled', 'false').lower() == 'true'
@@ -132,7 +123,6 @@ logger.addHandler(stream_handler)
# Create a FakeUserAgent object
ua = UserAgent()
import random
import threading
# Initialize the thread lock
@@ -147,7 +137,6 @@ def getPROXY_API_PREFIX(lock):
return None
else:
return "/" + (PROXY_API_PREFIX[index % len(PROXY_API_PREFIX)])
index += 1
def generate_unique_id(prefix):
@@ -221,7 +210,6 @@ def oaiFreeGetAccessToken(getAccessTokenUrl, refresh_token):
'refresh_token': refresh_token,
}
response = requests.post(getAccessTokenUrl, data=data)
logging.info(response.text)
if not response.ok:
logger.error("Request 失败: " + response.text.strip())
return None
@@ -336,9 +324,9 @@ scheduler.start()
# PANDORA_UPLOAD_URL = 'files.pandoranext.com'
VERSION = '0.7.9.0'
VERSION = '0.7.9.4'
# VERSION = 'test'
UPDATE_INFO = 'Integrated oaifree'
UPDATE_INFO = 'Fix empty replies, support more file types'
# UPDATE_INFO = '[For temporary testing only] '
with app.app_context():
@@ -449,7 +437,11 @@ with app.app_context():
"name": name.strip(),
"ori_name": "gpt-3.5-turbo"
})
for name in GPT_4_O_NEW_NAMES:
gpts_configurations.append({
"name": name.strip(),
"ori_name": "gpt-4-o"
})
logger.info(f"GPTS 配置信息")
# 加载配置并添加到全局列表
@@ -500,7 +492,6 @@ def get_token():
logger.error(f"请求异常: {e}")
raise Exception("获取 arkose token 失败")
return None
import os
@@ -614,7 +605,7 @@ def get_file_metadata(file_content, mime_type, api_key, proxy_api_prefix):
sha256_hash = hashlib.sha256(file_content).hexdigest()
logger.debug(f"sha256_hash: {sha256_hash}")
# First, try to fetch the data from Redis
cached_data = file_redis_client.get(sha256_hash)
cached_data = redis_client.get(sha256_hash)
if cached_data is not None:
# If the data was found in Redis, decode it and return it directly
logger.info(f"Got cached file data from Redis")
@@ -653,7 +644,7 @@ def get_file_metadata(file_content, mime_type, api_key, proxy_api_prefix):
new_file_data['height'] = height
# Store the new file data in Redis
file_redis_client.set(sha256_hash, json.dumps(new_file_data))
redis_client.set(sha256_hash, json.dumps(new_file_data))
return new_file_data
@@ -687,7 +678,7 @@ def get_file_extension(mime_type):
"text/x-script.python": ".py",
# Other MIME types and extensions...
}
return extension_mapping.get(mime_type, "")
return extension_mapping.get(mime_type, mimetypes.guess_extension(mime_type))
my_files_types = [
@@ -702,7 +693,7 @@ my_files_types = [
# Define the function that sends the request
def send_text_prompt_and_get_response(messages, api_key, stream, model, proxy_api_prefix):
def send_text_prompt_and_get_response(messages, api_key, account_id, stream, model, proxy_api_prefix):
url = f"{BASE_URL}{proxy_api_prefix}/backend-api/conversation"
headers = {
"Authorization": f"Bearer {api_key}"
@@ -720,7 +711,7 @@ def send_text_prompt_and_get_response(messages, api_key, stream, model, proxy_ap
message_id = str(uuid.uuid4())
content = message.get("content")
if isinstance(content, list) and ori_model_name != 'gpt-3.5-turbo':
if isinstance(content, list) and ori_model_name not in ['gpt-3.5-turbo']:
logger.debug(f"gpt-vision 调用")
new_parts = []
attachments = []
@@ -848,13 +839,9 @@ def send_text_prompt_and_get_response(messages, api_key, stream, model, proxy_ap
"action": "next",
"messages": formatted_messages,
"parent_message_id": str(uuid.uuid4()),
"model": "gpt-4-mobile",
"model": "gpt-4",
"timezone_offset_min": -480,
"suggestions": [
"Give me 3 ideas about how to plan good New Years resolutions. Give me some that are personal, family, and professionally-oriented.",
"Write a text asking a friend to be my plus-one at a wedding next month. I want to keep it super short and casual, and offer an out.",
"Design a database schema for an online merch store.",
"Compare Gen Z and Millennial marketing strategies for sunglasses."],
"suggestions": [],
"history_and_training_disabled": False,
"conversation_mode": {"kind": "primary_assistant"}, "force_paragen": False, "force_rate_limit": False
}
@@ -880,6 +867,28 @@ def send_text_prompt_and_get_response(messages, api_key, stream, model, proxy_ap
"force_paragen": False,
"force_rate_limit": False
}
elif ori_model_name == 'gpt-4-o':
payload = {
# Build the payload
"action": "next",
"messages": formatted_messages,
"parent_message_id": str(uuid.uuid4()),
"model": "gpt-4o",
"timezone_offset_min": -480,
"suggestions": [
"What are 5 creative things I could do with my kids' art? I don't want to throw them away, but it's also so much clutter.",
"I want to cheer up my friend who's having a rough day. Can you suggest a couple short and sweet text messages to go with a kitten gif?",
"Come up with 5 concepts for a retro-style arcade game.",
"I have a photoshoot tomorrow. Can you recommend me some colors and outfit options that will look good on camera?"
],
"history_and_training_disabled": False,
"arkose_token": None,
"conversation_mode": {
"kind": "primary_assistant"
},
"force_paragen": False,
"force_rate_limit": False
}
elif 'gpt-4-gizmo-' in model:
payload = generate_gpts_payload(model, formatted_messages)
if not payload:
@@ -914,12 +923,17 @@ def send_text_prompt_and_get_response(messages, api_key, stream, model, proxy_ap
if NEED_DELETE_CONVERSATION_AFTER_RESPONSE:
logger.debug(f"是否保留会话: {NEED_DELETE_CONVERSATION_AFTER_RESPONSE == False}")
payload['history_and_training_disabled'] = True
if ori_model_name != 'gpt-3.5-turbo':
if ori_model_name not in ['gpt-3.5-turbo', 'gpt-4-o']:
if CUSTOM_ARKOSE:
token = get_token()
payload["arkose_token"] = token
# Add a new field to the headers
headers["Openai-Sentinel-Arkose-Token"] = token
# Used for calling with a ChatGPT Team account
if account_id:
headers["ChatGPT-Account-ID"] = account_id
logger.debug(f"headers: {headers}")
logger.debug(f"payload: {payload}")
response = requests.post(url, headers=headers, json=payload, stream=True)
@@ -1139,7 +1153,8 @@ def replace_sandbox(text, conversation_id, message_id, api_key, proxy_api_prefix
return replaced_text
def data_fetcher(upstream_response, data_queue, stop_event, last_data_time, api_key, chat_message_id, model, proxy_api_prefix):
def data_fetcher(upstream_response, data_queue, stop_event, last_data_time, api_key, chat_message_id, model,
proxy_api_prefix):
all_new_text = ""
first_output = True
@@ -1159,6 +1174,7 @@ def data_fetcher(upstream_response, data_queue, stop_event, last_data_time, api_
file_output_accumulating = False
execution_output_image_url_buffer = ""
execution_output_image_id_buffer = ""
message = None
try:
for chunk in upstream_response.iter_content(chunk_size=1024):
if stop_event.is_set():
@@ -1372,7 +1388,7 @@ def data_fetcher(upstream_response, data_queue, stop_event, last_data_time, api_
if is_complete_sandbox_format(file_output_buffer):
# Replace the complete citation format
replaced_text = replace_sandbox(file_output_buffer, conversation_id,
message_id, api_key,proxy_api_prefix)
message_id, api_key, proxy_api_prefix)
# print(replaced_text) # print the replaced text
new_text = replaced_text
file_output_accumulating = False
@@ -1487,7 +1503,8 @@ def data_fetcher(upstream_response, data_queue, stop_event, last_data_time, api_
execution_output_image_url_buffer = f"{UPLOAD_BASE_URL}/{today_image_url}"
else:
logger.error(f"下载图片失败: {image_download_response.text}")
logger.error(
f"下载图片失败: {image_download_response.text}")
execution_output_image_id_buffer = image_file_id
@@ -1756,13 +1773,19 @@ def chat_completions():
auth_header = request.headers.get('Authorization')
if not auth_header or not auth_header.startswith('Bearer '):
return jsonify({"error": "Authorization header is missing or invalid"}), 401
api_key = auth_header.split(' ')[1]
api_key = None
try:
api_key = auth_header.split(' ')[1].split(',')[0].strip()
account_id = auth_header.split(' ')[1].split(',')[1].strip()
logging.info(f"{api_key}:{account_id}")
except IndexError:
account_id = None
if not api_key.startswith("eyJhb"):
refresh_token = api_key
if api_key in refresh_dict:
logger.info(f"从缓存读取到api_key.........。")
api_key = refresh_dict.get(api_key)
else:
refresh_token = api_key
if REFRESH_TOACCESS_ENABLEOAI:
api_key = oaiGetAccessToken(api_key)
else:
@@ -1772,7 +1795,11 @@ def chat_completions():
add_to_dict(refresh_token, api_key)
logger.info(f"api_key: {api_key}")
upstream_response = send_text_prompt_and_get_response(messages, api_key, stream, model, proxy_api_prefix)
upstream_response = send_text_prompt_and_get_response(messages, api_key, account_id, stream, model,
proxy_api_prefix)
if upstream_response.status_code != 200:
return jsonify({"error": f"{upstream_response.text}"}), upstream_response.status_code
# For non-streaming responses, we need a variable to accumulate all of the new_text
all_new_text = ""
@@ -1791,7 +1818,8 @@ def chat_completions():
# Start the data processing thread
fetcher_thread = threading.Thread(target=data_fetcher, args=(
upstream_response, data_queue, stop_event, last_data_time, api_key, chat_message_id, model,proxy_api_prefix))
upstream_response, data_queue, stop_event, last_data_time, api_key, chat_message_id, model,
proxy_api_prefix))
fetcher_thread.start()
# Start the keep-alive thread
@@ -1848,9 +1876,9 @@ def chat_completions():
fetcher_thread.join()
keep_alive_thread.join()
if conversation_id:
# print(f"准备删除的会话id {conversation_id}")
delete_conversation(conversation_id, api_key,proxy_api_prefix)
# if conversation_id:
# # print(f"准备删除的会话id {conversation_id}")
# delete_conversation(conversation_id, api_key,proxy_api_prefix)
if not stream:
# Run the streaming response generator to accumulate all_new_text
@@ -1906,6 +1934,7 @@ def images_generations():
return jsonify({"error": "PROXY_API_PREFIX is not accessible"}), 401
data = request.json
logger.debug(f"data: {data}")
api_key = None
# messages = data.get('messages')
model = data.get('model')
accessible_model_list = get_accessible_model_list()
@@ -1924,14 +1953,19 @@ def images_generations():
auth_header = request.headers.get('Authorization')
if not auth_header or not auth_header.startswith('Bearer '):
return jsonify({"error": "Authorization header is missing or invalid"}), 401
api_key = auth_header.split(' ')[1]
try:
api_key = auth_header.split(' ')[1].split(',')[0].strip()
account_id = auth_header.split(' ')[1].split(',')[1].strip()
logging.info(f"{api_key}:{account_id}")
except IndexError:
account_id = None
if not api_key.startswith("eyJhb"):
refresh_token = api_key
if api_key in refresh_dict:
logger.info(f"从缓存读取到api_key.........")
api_key = refresh_dict.get(api_key)
else:
if REFRESH_TOACCESS_ENABLEOAI:
refresh_token = api_key
api_key = oaiGetAccessToken(api_key)
else:
api_key = oaiFreeGetAccessToken(REFRESH_TOACCESS_OAIFREE_REFRESHTOACCESS_URL, api_key)
@@ -1951,7 +1985,10 @@ def images_generations():
}
]
upstream_response = send_text_prompt_and_get_response(messages, api_key, False, model,proxy_api_prefix)
upstream_response = send_text_prompt_and_get_response(messages, api_key, account_id, False, model, proxy_api_prefix)
if upstream_response.status_code != 200:
return jsonify({"error": f"{upstream_response.text}"}), upstream_response.status_code
# For non-streaming responses, we need a variable to accumulate all of the new_text
all_new_text = ""
@@ -1971,6 +2008,7 @@ def images_generations():
conversation_id = ''
citation_buffer = ""
citation_accumulating = False
message = None
for chunk in upstream_response.iter_content(chunk_size=1024):
if chunk:
buffer += chunk.decode('utf-8')
@@ -1993,7 +2031,7 @@ def images_generations():
# print(f"data_json: {data_json}")
message = data_json.get("message", {})
if message == None:
if message is None:
logger.error(f"message 为空: data_json: {data_json}")
message_status = message.get("status")
@@ -2346,7 +2384,22 @@ def catch_all(path):
logger.debug(f"请求头: {request.headers}")
logger.debug(f"请求体: {request.data}")
return jsonify({"message": "Welcome to Inker's World"}), 200
html_string = f"""
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
<p> Thanks for using RefreshToV1Api {VERSION}</p>
<p> 感谢Ink-Osier大佬的付出敬礼</p>
<p><a href="https://github.com/Yanyutin753/RefreshToV1Api">项目地址</a></p>
</body>
</html>
"""
return html_string, 500
@app.route('/images/<filename>')
@@ -2367,6 +2420,53 @@ def get_file(filename):
return send_from_directory('files', filename)
@app.route(f'/{API_PREFIX}/getAccountID' if API_PREFIX else '/getAccountID', methods=['POST'])
@cross_origin() # Use the decorator to allow cross-origin requests
def getAccountID():
logger.info(f"New Account Request")
proxy_api_prefix = getPROXY_API_PREFIX(lock)
if proxy_api_prefix is None:
return jsonify({"error": "PROXY_API_PREFIX is not accessible"}), 401
auth_header = request.headers.get('Authorization')
if not auth_header or not auth_header.startswith('Bearer '):
return jsonify({"error": "Authorization header is missing or invalid"}), 401
api_key = auth_header.split(' ')[1].split(',')[0].strip()
if not api_key.startswith("eyJhb"):
refresh_token = api_key
if api_key in refresh_dict:
logger.info(f"从缓存读取到api_key.........")
api_key = refresh_dict.get(api_key)
else:
if REFRESH_TOACCESS_ENABLEOAI:
api_key = oaiGetAccessToken(api_key)
else:
api_key = oaiFreeGetAccessToken(REFRESH_TOACCESS_OAIFREE_REFRESHTOACCESS_URL, api_key)
if not api_key.startswith("eyJhb"):
return jsonify({"error": "refresh_token is wrong or refresh_token url is wrong!"}), 401
add_to_dict(refresh_token, api_key)
logger.info(f"api_key: {api_key}")
url = f"{BASE_URL}{proxy_api_prefix}/backend-api/accounts/check/v4-2023-04-27"
headers = {
"Authorization": "Bearer " + api_key
}
res = requests.get(url, headers=headers)
if res.status_code == 200:
data = res.json()
result = {"plus": set(), "team": set()}
for account_id, account_data in data["accounts"].items():
plan_type = account_data["account"]["plan_type"]
if plan_type == "team":
result[plan_type].add(account_id)
elif plan_type == "plus":
result[plan_type].add(account_id)
result = {plan_type: list(ids) for plan_type, ids in result.items()}
return jsonify(result)
else:
return jsonify({"error": "Request failed."}), 400
# Built-in automatic access_token refresh
def updateRefresh_dict():
success_num = 0

start.sh
View File

@@ -1,34 +0,0 @@
#!/bin/bash
# Record the current date and time
NOW=$(date +"%Y-%m-%d-%H-%M")
# Try to get the parameters from environment variables; if absent, read them from config.json
# If the values still do not exist, fall back to the defaults
if [ -z "$PROCESS_WORKERS" ]; then
PROCESS_WORKERS=$(jq -r '.process_workers // empty' /app/data/config.json)
export PROCESS_WORKERS
if [ -z "$PROCESS_WORKERS" ]; then
PROCESS_WORKERS=2
fi
fi
if [ -z "$PROCESS_THREADS" ]; then
PROCESS_THREADS=$(jq -r '.process_threads // empty' /app/data/config.json)
export PROCESS_THREADS
if [ -z "$PROCESS_THREADS" ]; then
PROCESS_THREADS=2
fi
fi
export PROCESS_WORKERS
export PROCESS_THREADS
echo "PROCESS_WORKERS: ${PROCESS_WORKERS}"
echo "PROCESS_THREADS: ${PROCESS_THREADS}"
# Start Gunicorn with access and error logs sent to the console
exec gunicorn -w ${PROCESS_WORKERS} --threads ${PROCESS_THREADS} --bind 0.0.0.0:33333 main:app --access-logfile - --error-logfile -