Mirror of https://github.com/Yanyutin753/RefreshToV1Api.git, synced 2025-12-13 02:00:14 +08:00

Compare commits (18 commits)
Commit SHA1s:
e3c0ed6881, 20a0c67872, 6483e64460, 07aa5ec83b, 0f4ab22273, ba4e89101f, 876417a9d1, 0564b10550, 148e96108c, afcca06c9d, f5b5592f66, 81ff100e9c, 256e6dbce6, 7ce92ae642, d0cc050a51, 05d5a1a13e, bd0e470427, 3d3d939e3c
.github/workflows/ninja-image.yml (vendored, 1 line changed)
@@ -42,5 +42,6 @@ jobs:
           push: true
           tags: |
             yangclivia/pandora-to-api:${{ steps.tag_name.outputs.tag }}
+            yangclivia/pandora-to-api:0.7.7
           platforms: linux/amd64,linux/arm64
           build-args: TARGETPLATFORM=${{ matrix.platform }}
.github/workflows/xyhelper-deploy.yml (vendored, 1 line changed)
@@ -42,5 +42,6 @@ jobs:
           push: true
           tags: |
             yangclivia/pandora-to-api:${{ steps.tag_name.outputs.tag }}
+            yangclivia/pandora-to-api:0.7.8
           platforms: linux/amd64,linux/arm64
           build-args: TARGETPLATFORM=${{ matrix.platform }}
.gitignore (vendored, new file, 5 lines added)
@@ -0,0 +1,5 @@
+*.json
+*.log
Dockerfile

@@ -10,15 +10,13 @@ COPY . /app
 # Set environment variables
 ENV PYTHONUNBUFFERED=1

-RUN chmod +x /app/start.sh
-
-RUN apt update && apt install -y jq
+RUN chmod +x /app/main.py

 # # Set the pip index to the Tsinghua University mirror
 # RUN pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

 # Install the required dependencies
-RUN pip install --no-cache-dir flask flask_apscheduler gunicorn requests Pillow flask-cors tiktoken fake_useragent redis websocket-client pysocks requests[socks] websocket-client[optional]
+RUN pip install --no-cache-dir flask flask_apscheduler requests Pillow flask-cors tiktoken fake_useragent redis websocket-client pysocks requests[socks] websocket-client[optional]

 # Run the Flask application when the container starts
-CMD ["/app/start.sh"]
+CMD ["python3", "main.py"]
Readme.md (38 lines changed)
@@ -1,8 +1,9 @@
-## Project overview

+## 0.7.8 xyhelper project overview
 [](https://github.com/Yanyutin753/refresh-gpt-chat/stargazers)
 > [!IMPORTANT]
 >
 > Respect `xyhelper`, Respect `ninja`, Respect `Wizerd`!
 > Open source takes effort, please give the project a free star!!!

 Thanks to the xyhelper, ninja, and Wizerd maintainers for their work. Salute!!!
@@ -14,7 +15,7 @@

 3. Supports using the refresh_token directly as the request key, which makes it easy to plug into one_api

-4. Supports gpt-4-mobile, gpt-4-s, and basically all GPTS
+4. Supports gpt-4-mobile, gpt-4-s, and dynamically supports all gpt-4-gizmo-XXX models

 * **xyhelper's free backend-api endpoint, no captcha required**
@@ -166,7 +167,9 @@ PS. Note: the addresses in arkose_urls must support PandoraNext Arkose Token retrieval

 ## GPTS configuration

-If you want to use GPTS, edit the `gpts.json` file: each object's key is the model name used when calling that GPTS, and `id` is the corresponding model id, which is the suffix of that GPTS link. Separate multiple GPTS entries with commas.
+### Using GPTS
+
+1. You can edit the `gpts.json` file: each object's key is the model name used when calling that GPTS, and `id` is the corresponding model id, which is the suffix of that GPTS link. Separate multiple GPTS entries with commas.

 For example, the official PandoraNext GPTS link is `https://chat.oaifree.com/g/g-CFsXuTRfy-pandoranextzhu-shou`, so that model's `id` should be `g-CFsXuTRfy-pandoranextzhu-shou`, while the model name can be chosen freely.
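The hunk above describes the shape of `gpts.json`, but the README's own example is not included in this compare view. A minimal sketch of what such a file could look like, using the `id` from the PandoraNext example above; the model name `pandora-next-helper` is invented for illustration and the exact field layout is an assumption:

```json
{
    "pandora-next-helper": {
        "id": "g-CFsXuTRfy-pandoranextzhu-shou"
    }
}
```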
@@ -183,7 +186,16 @@ PS. Note: the addresses in arkose_urls must support PandoraNext Arkose Token retrieval
 }
 ```

-Note: when using this configuration, make sure the `KEY_FOR_GPTS_INFO` environment variable in `docker-compose.yml` is filled in correctly; the `key` set by this variable is allowed to access every configured GPTS.
+2. Alternatively, add the corresponding gpt-4-gizmo-XXX model name directly to the request, where XXX equals the `id` value described above.
+```json
+{
+    "stream": true,
+    "model": "gpt-4-gizmo-XXXX",
+    "messages": [{"role": "user", "content": "你是什么模型"}]
+}
+```
+
+Note: when using this configuration, make sure the `KEY_FOR_GPTS_INFO` environment variable in the `config.json` file is filled in correctly; the `key` set by this variable is allowed to access every configured GPTS.

 ## Image generation API usage
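For option 2 above (requesting a gpt-4-gizmo-XXX model directly), the call is an ordinary OpenAI-style chat completion with the refresh_token used as the API key. A minimal client sketch, assuming the service listens on port 33333 (the port used in start.sh and main.py) and exposes a /v1/chat/completions route; the host, route prefix, and model id here are assumptions for illustration:

```python
import requests

# Assumed local endpoint; port 33333 comes from start.sh / main.py,
# the /v1/chat/completions path is an assumption based on the OpenAI-style API.
BASE_URL = "http://127.0.0.1:33333"
REFRESH_TOKEN = "<your refresh_token>"  # used directly as the request key

resp = requests.post(
    f"{BASE_URL}/v1/chat/completions",
    headers={"Authorization": f"Bearer {REFRESH_TOKEN}"},
    json={
        "stream": False,
        "model": "gpt-4-gizmo-g-CFsXuTRfy-pandoranextzhu-shou",  # gpt-4-gizmo-<id>
        "messages": [{"role": "user", "content": "你是什么模型"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```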
@@ -324,19 +336,20 @@ PS. Note: the addresses in arkose_urls must support PandoraNext Arkose Token retrieval

 ## Example

-Taking the docker-compose deployment of the ChatGPT-Next-Web project as an example, here is a simple deployment configuration file:
+Taking the docker-compose deployment of the plugin edition of [ChatGPT-Next-Web](https://github.com/Yanyutin753/ChatGPT-Next-Web-LangChain-Gpt-4-All) as an example (it is fully compatible with this project), here is a simple deployment configuration file:

 ```
 version: '3'
 services:
   chatgpt-next-web:
-    image: yidadaa/chatgpt-next-web
+    image: yangclivia/chatgpt-next-web-langchain
     ports:
       - "50013:3000"
     environment:
       - CUSTOM_MODELS=-all,+gpt-3.5-turbo,+gpt-4-s,+gpt-4-mobile,+gpt-4-vision-preview,+gpt-4-gizmo-XXX
       - OPENAI_API_KEY=<a valid refresh_token>
       - BASE_URL=<address of the backend-to-api container>
-      - CUSTOM_MODELS=+gpt-4-s,+gpt-4-mobile,+<model name from gpts.json>
+      - CUSTOM_MODELS=-gpt-4-0613,-gpt-4-32k,-gpt-4-32k-0613,-gpt-4-turbo-preview,-gpt-4-1106-preview,-gpt-4-0125-preview,-gpt-3.5-turbo-0125,-gpt-3.5-turbo-0613,-gpt-3.5-turbo-1106,-gpt-3.5-turbo-16k,-gpt-3.5-turbo-16k-0613,+gpt-3.5-turbo,+gpt-4,+gpt-4-mobile,+gpt-4-vision-preview,+gpt-4-mobile,+<model name from gpts.json>

 ```
@@ -351,17 +364,20 @@ services:

 [screenshot]

+### Reading files
+[screenshot]

 ### Drawing
 [screenshot]

 ### GPT-4-Mobile
 [screenshot]

 ### GPTS
 [screenshot]

 ### Bot mode
log/access.log (1162 lines changed; file diff suppressed because it is too large)
main.py (69 lines changed)
@@ -202,7 +202,7 @@ def oaiGetAccessToken(refresh_token):
 def xyhelperGetAccessToken(getAccessTokenUrl, refresh_token):
     try:
         logger.info("将通过这个网址请求access_token:" + getAccessTokenUrl)

         data = {
             'refresh_token': refresh_token,
         }
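Only the head of xyhelperGetAccessToken is visible in this hunk. A minimal sketch of the exchange it appears to perform, posting the refresh_token to getAccessTokenUrl and reading an access_token out of the JSON reply; the form-encoded POST and the access_token field name are assumptions, since the rest of the function is not shown:

```python
import requests

def xyhelper_get_access_token_sketch(get_access_token_url, refresh_token):
    """Hedged sketch of the refresh_token -> access_token exchange."""
    data = {"refresh_token": refresh_token}
    resp = requests.post(get_access_token_url, data=data, timeout=30)
    resp.raise_for_status()
    # Assumed response shape: {"access_token": "..."}; the real field name is not shown in the diff.
    return resp.json().get("access_token")
```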
@@ -271,20 +271,20 @@ def add_config_to_global_list(base_url, proxy_api_prefix, gpts_data):
         else:
             logger.info(f"Fetching gpts info for {model_name}, {model_id}")
             gizmo_info = fetch_gizmo_info(base_url, proxy_api_prefix, model_id)

             # If the data was fetched successfully, store it in Redis
             if gizmo_info:
                 redis_client.set(model_id, str(gizmo_info))
                 logger.info(f"Cached gizmo info for {model_name}, {model_id}")
-                # Check whether the model name is already in the list
-                if not any(d['name'] == model_name for d in gpts_configurations):
-                    gpts_configurations.append({
-                        'name': model_name,
-                        'id': model_id,
-                        'config': gizmo_info
-                    })
-                else:
-                    logger.info(f"Model already exists in the list, skipping...")
+            # Check whether the model name is already in the list
+            if gizmo_info and not any(d['name'] == model_name for d in gpts_configurations):
+                gpts_configurations.append({
+                    'name': model_name,
+                    'id': model_id,
+                    'config': gizmo_info
+                })
+            else:
+                logger.info(f"Model already exists in the list, skipping...")


 def generate_gpts_payload(model, messages):
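Because the gizmo info is cached with redis_client.set(model_id, str(gizmo_info)), a later lookup has to turn that Python-repr string back into a dict. A minimal sketch of such a read-back; the helper itself is not part of the diff and is shown only as an illustration:

```python
import ast

def get_cached_gizmo_info(redis_client, model_id):
    """Read back a gizmo config that was stored as str(dict)."""
    raw = redis_client.get(model_id)
    if raw is None:
        return None
    if isinstance(raw, bytes):
        raw = raw.decode("utf-8")
    # str(dict) produces single-quoted, non-JSON text, so parse it as a Python literal
    return ast.literal_eval(raw)
```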
@@ -322,9 +322,11 @@ scheduler.start()
 # PANDORA_UPLOAD_URL = 'files.pandoranext.com'


-VERSION = '0.7.8'
+VERSION = '0.7.8.3'
 # VERSION = 'test'
-UPDATE_INFO = '项目将脱离ninja,使用xyhelper,xyhelper_refreshToAccess_Url等配置需修改'
+UPDATE_INFO = 'flask直接启动,解决部分机cpu占用过大问题'


 # UPDATE_INFO = '【仅供临时测试使用】 '

 # Parse the information in the response
@@ -866,10 +868,10 @@ def send_text_prompt_and_get_response(messages, api_key, stream, model):
     # Check whether an ori_name is present
     if model_config:
         ori_model_name = model_config.get('ori_name', model)
         logger.info(f"原模型名: {ori_model_name}")
     else:
         ori_model_name = model
-        logger.info(f"原模型名: {model}")
+        logger.info(f"原模型名: {ori_model_name}")
     logger.info(f"请求模型名: {model}")
     if ori_model_name == 'gpt-4-s':
         payload = {
             # Build the payload
@@ -945,7 +947,28 @@ def send_text_prompt_and_get_response(messages, api_key, stream, model):
                 logger.info(f"Model already exists in the list, skipping...")
                 payload = generate_gpts_payload(model, formatted_messages)
             else:
-                raise Exception('KEY_FOR_GPTS_INFO is not accessible')
+                payload = {
+                    # Build the payload
+                    "action": "next",
+                    "messages": formatted_messages,
+                    "parent_message_id": str(uuid.uuid4()),
+                    "model": "text-davinci-002-render-sha",
+                    "timezone_offset_min": -480,
+                    "suggestions": [
+                        "What are 5 creative things I could do with my kids' art? I don't want to throw them away, but it's also so much clutter.",
+                        "I want to cheer up my friend who's having a rough day. Can you suggest a couple short and sweet text messages to go with a kitten gif?",
+                        "Come up with 5 concepts for a retro-style arcade game.",
+                        "I have a photoshoot tomorrow. Can you recommend me some colors and outfit options that will look good on camera?"
+                    ],
+                    "history_and_training_disabled": False,
+                    "arkose_token": None,
+                    "conversation_mode": {
+                        "kind": "primary_assistant"
+                    },
+                    "force_paragen": False,
+                    "force_rate_limit": False
+                }
+                logger.debug('KEY_FOR_GPTS_INFO Or Request Model is not accessible')

         else:
             payload = generate_gpts_payload(model, formatted_messages)
@@ -1122,7 +1145,7 @@ def replace_sandbox(text, conversation_id, message_id, api_key):
         sandbox_path = match.group(1)
         download_url = get_download_url(conversation_id, message_id, sandbox_path)
         if download_url == None:
-            return "\n```\nError: 沙箱文件下载失败,这可能是因为您启用了隐私模式\n```"
+            return "\n```\nError: 沙箱文件下载失败,这可能是因为您的帐号启用了隐私模式\n```"
         file_name = extract_filename(download_url)
         timestamped_file_name = timestamp_filename(file_name)
         if USE_OAIUSERCONTENT_URL == False:
@@ -1190,7 +1213,7 @@ def generate_actions_allow_payload(author_role, author_name, target_message_id,
         "action": "next",
         "messages": [
             {
-                "id": generate_custom_uuid_v4(),
+                "id": str(uuid.uuid4()),
                 "author": {
                     "role": author_role,
                     "name": author_name
@@ -2041,7 +2064,7 @@ def old_data_fetcher(upstream_response, data_queue, stop_event, last_data_time,
                 "id": chat_message_id,
                 "object": "chat.completion.chunk",
                 "created": timestamp,
-                "model": message.get("metadata", {}).get("model_slug"),
+                "model": model,
                 "choices": [
                     {
                         "index": 0,
@@ -2279,6 +2302,8 @@ def chat_completions():
     accessible_model_list = get_accessible_model_list()
     if model not in accessible_model_list and not 'gpt-4-gizmo-' in model:
         return jsonify({"error": "model is not accessible"}), 401
+    elif 'gpt-4-gizmo-' in model and not KEY_FOR_GPTS_INFO:
+        return jsonify({"error": "key_for_gpts_info is not accessible"}), 400

     stream = data.get('stream', False)
@@ -2418,7 +2443,7 @@ def chat_completions():
             }
         ],
         "usage": {
             # The token counts here need to be computed from actual usage
             "prompt_tokens": input_tokens,
             "completion_tokens": comp_tokens,
             "total_tokens": input_tokens + comp_tokens
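The usage block above is filled from input_tokens and comp_tokens. A minimal sketch of how such counts could be produced with tiktoken (one of the packages installed in the Dockerfile), assuming the cl100k_base encoding is an acceptable stand-in for the real model encoding:

```python
import tiktoken

def count_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    """Rough token count for a prompt or completion string."""
    enc = tiktoken.get_encoding(encoding_name)
    return len(enc.encode(text))

# Example usage for a single exchange
input_tokens = count_tokens("你是什么模型")
comp_tokens = count_tokens("I am a language model.")
total_tokens = input_tokens + comp_tokens
```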
@@ -2441,6 +2466,8 @@ def images_generations():
     accessible_model_list = get_accessible_model_list()
     if model not in accessible_model_list and not 'gpt-4-gizmo-' in model:
         return jsonify({"error": "model is not accessible"}), 401
+    elif 'gpt-4-gizmo-' in model and not KEY_FOR_GPTS_INFO:
+        return jsonify({"error": "key_for_gpts_info is not accessible"}), 400

     prompt = data.get('prompt', '')
@@ -2672,4 +2699,4 @@ scheduler.add_job(id='updateRefresh_run', func=updateRefresh_dict, trigger='cron

 # Run the Flask application
 if __name__ == '__main__':
-    app.run(host='0.0.0.0')
+    app.run(host='0.0.0.0', port=33333, threaded=True)
start.sh (3 lines changed)
@@ -31,4 +31,5 @@ echo "PROCESS_WORKERS: ${PROCESS_WORKERS}"
 echo "PROCESS_THREADS: ${PROCESS_THREADS}"

 # Start Gunicorn, using tee to send logs to both a file and the console
-exec gunicorn -w ${PROCESS_WORKERS} --threads ${PROCESS_THREADS} --bind 0.0.0.0:33333 main:app --access-logfile - --error-logfile -
+exec gunicorn -w ${PROCESS_WORKERS} --threads ${PROCESS_THREADS} --bind 0.0.0.0:33333 main:app --access-logfile - --error-logfile - --timeout 60