mirror of https://github.com/LLM-Red-Team/glm-free-api.git
synced 2025-07-18 10:13:44 +00:00
Keep up with the official site's anti-reverse-engineering measures and add support for the latest reasoning model and the deepresearch (沉思) model
README.md (59 lines changed)
@@ -9,7 +9,7 @@




-Supports GLM-4-Plus high-speed streaming output, multi-turn dialogue, agent dialogue, the Zero thinking/reasoning model, video generation, AI drawing, internet search, long-document reading, and image analysis; zero-configuration deployment, multi-token support, and automatic cleanup of session traces.
+Supports GLM-4-Plus high-speed streaming output, multi-turn dialogue, agent dialogue, the deepresearch (沉思) model, the Zero thinking/reasoning model, video generation, AI drawing, internet search, long-document reading, and image analysis; zero-configuration deployment, multi-token support, and automatic cleanup of session traces.

Fully compatible with the ChatGPT interface.
@@ -37,28 +37,40 @@ MiniMax (Hailuo AI) API to API [hailuo-free-api](https://github.com/LLM-Red-T

## Table of Contents

-* [Disclaimer](#免责声明)
-* [Effect Examples](#效果示例)
-* [Access Preparation](#接入准备)
-* [Agent Access](#智能体接入)
-* [Multiple Account Access](#多账号接入)
-* [Docker Deployment](#Docker部署)
-* [Docker-compose Deployment](#Docker-compose部署)
-* [Render Deployment](#Render部署)
-* [Vercel Deployment](#Vercel部署)
-* [Native Deployment](#原生部署)
-* [Recommended Clients](#推荐使用客户端)
-* [Interface List](#接口列表)
-* [Conversation Completion](#对话补全)
-* [Video Generation](#视频生成)
-* [AI Drawing](#AI绘图)
-* [Document Interpretation](#文档解读)
-* [Image Analysis](#图像解析)
-* [refresh_token Survival Detection](#refresh_token存活检测)
-* [Notes](#注意事项)
-* [Nginx Reverse Proxy Optimization](#Nginx反代优化)
-* [Token Statistics](#Token统计)
-* [Star History](#star-history)
+- [GLM AI Free Service](#glm-ai-free-服务)
+- [Table of Contents](#目录)
+- [Disclaimer](#免责声明)
+- [Effect Examples](#效果示例)
+- [Identity Verification Demo](#验明正身demo)
+- [Agent Dialogue Demo](#智能体对话demo)
+- [Dify Workflow Integration Demo](#结合dify工作流demo)
+- [Multi-turn Dialogue Demo](#多轮对话demo)
+- [Video Generation Demo](#视频生成demo)
+- [AI Drawing Demo](#ai绘图demo)
+- [Internet Search Demo](#联网搜索demo)
+- [Long Document Reading Demo](#长文档解读demo)
+- [Code Invocation Demo](#代码调用demo)
+- [Image Analysis Demo](#图像解析demo)
+- [Access Preparation](#接入准备)
+- [Agent Access](#智能体接入)
+- [Multiple Account Access](#多账号接入)
+- [Docker Deployment](#docker部署)
+- [Docker-compose Deployment](#docker-compose部署)
+- [Render Deployment](#render部署)
+- [Vercel Deployment](#vercel部署)
+- [Native Deployment](#原生部署)
+- [Recommended Clients](#推荐使用客户端)
+- [Interface List](#接口列表)
+- [Conversation Completion](#对话补全)
+- [Video Generation](#视频生成)
+- [AI Drawing](#ai绘图)
+- [Document Interpretation](#文档解读)
+- [Image Analysis](#图像解析)
+- [refresh\_token Survival Detection](#refresh_token存活检测)
+- [Notes](#注意事项)
+- [Nginx Reverse Proxy Optimization](#nginx反代优化)
+- [Token Statistics](#token统计)
+- [Star History](#star-history)

## Disclaimer

@@ -288,6 +300,7 @@ Authorization: Bearer [refresh_token]

{
// Default model: glm-4-plus
// Zero thinking/reasoning models: glm-4-zero / glm-4-think
+// Deepresearch (沉思) model: glm-4-deepresearch
// If using an agent, fill in the agent ID here
"model": "glm-4-plus",
// Multi-turn dialogue is currently implemented by merging messages, which may degrade capability in some scenarios and is limited by the single-turn maximum token count
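For context, a minimal sketch of calling the service's conversation-completion endpoint with the new deepresearch model. The base URL, port, `/v1/chat/completions` path, and response shape are assumptions based on the README's claim of ChatGPT-interface compatibility, and the refresh_token is a placeholder; this is an illustration, not code from the repository.

```ts
// Sketch only: local deployment address and OpenAI-style response shape are assumed.
const BASE_URL = "http://127.0.0.1:8000";                // adjust to your deployment
const REFRESH_TOKEN = "<your chatglm.cn refresh_token>"; // placeholder

async function deepresearchChat(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${REFRESH_TOKEN}`,
    },
    body: JSON.stringify({
      model: "glm-4-deepresearch", // or glm-4-plus / glm-4-zero / glm-4-think / an agent ID
      messages: [{ role: "user", content: prompt }],
      stream: false,
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  const data = await res.json();
  // Assumed OpenAI-style payload: the reply text sits in choices[0].message.content.
  return data.choices[0].message.content;
}

deepresearchChat("Summarize the trade-offs of long-context inference.").then(console.log);
```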
README_EN.md (61 lines changed)
@@ -5,7 +5,7 @@




-Supports high-speed streaming output, multi-turn dialogues, internet search, long document reading, image analysis, zero-configuration deployment, multi-token support, and automatic session trace cleanup.
+Supports high-speed streaming output, multi-turn dialogues, internet search, long document reading, image analysis, deepresearch, zero-configuration deployment, multi-token support, and automatic session trace cleanup.

Fully compatible with the ChatGPT interface.
@@ -33,29 +33,41 @@ Lingxin Intelligence (Emohaa) API to API [emohaa-free-api](https://github.com/LL

## Table of Contents

-* [Announcement](#Announcement)
-* [Online Experience](#Online-Experience)
-* [Effect Examples](#Effect-Examples)
-* [Access Preparation](#Access-Preparation)
-* [Agent Access](#Agent-Access)
-* [Multiple Account Access](#Multiple-Account-Access)
-* [Docker Deployment](#Docker-Deployment)
-* [Docker-compose Deployment](#Docker-compose-Deployment)
-* [Render Deployment](#Render-Deployment)
-* [Vercel Deployment](#Vercel-Deployment)
-* [Native Deployment](#Native-Deployment)
-* [Recommended Clients](#Recommended-Clients)
-* [Interface List](#Interface-List)
-* [Conversation Completion](#Conversation-Completion)
-* [Video Generation](#Video-Generation)
-* [AI Drawing](#AI-Drawing)
-* [Document Interpretation](#Document-Interpretation)
-* [Image Analysis](#Image-Analysis)
-* [Refresh_token Survival Detection](#Refresh_token-Survival-Detection)
-* [Notification](#Notification)
-* [Nginx Anti-generation Optimization](#Nginx-Anti-generation-Optimization)
-* [Token Statistics](#Token-Statistics)
-* [Star History](#star-history)
+- [GLM AI Free Service](#glm-ai-free-service)
+- [Table of Contents](#table-of-contents)
+- [Announcement](#announcement)
+- [Online Experience](#online-experience)
+- [Effect Examples](#effect-examples)
+- [Identity Verification](#identity-verification)
+- [AI-Agent](#ai-agent)
+- [Combined with Dify workflow](#combined-with-dify-workflow)
+- [Multi-turn Dialogue](#multi-turn-dialogue)
+- [Video Generation](#video-generation)
+- [AI Drawing](#ai-drawing)
+- [Internet Search](#internet-search)
+- [Long Document Reading](#long-document-reading)
+- [Using Code](#using-code)
+- [Image Analysis](#image-analysis)
+- [Access Preparation](#access-preparation)
+- [Agent Access](#agent-access)
+- [Multiple Account Access](#multiple-account-access)
+- [Docker Deployment](#docker-deployment)
+- [Docker-compose Deployment](#docker-compose-deployment)
+- [Render Deployment](#render-deployment)
+- [Vercel Deployment](#vercel-deployment)
+- [Native Deployment](#native-deployment)
+- [Recommended Clients](#recommended-clients)
+- [interface List](#interface-list)
+- [Conversation Completion](#conversation-completion)
+- [Video Generation](#video-generation-1)
+- [AI Drawing](#ai-drawing-1)
+- [Document Interpretation](#document-interpretation)
+- [Image Analysis](#image-analysis-1)
+- [Refresh\_token Survival Detection](#refresh_token-survival-detection)
+- [Notification](#notification)
+- [Nginx Anti-generation Optimization](#nginx-anti-generation-optimization)
+- [Token Statistics](#token-statistics)
+- [Star History](#star-history)

## Announcement

@@ -291,6 +303,7 @@ Request data:

{
// Default model: glm-4-plus
// zero thinking model: glm-4-zero / glm-4-think
+// deepresearch model: glm-4-deepresearch
// If using the Agent, fill in the Agent ID here
"model": "glm-4",
// Currently, multi-round conversations are realized based on message merging, which in some scenarios may lead to capacity degradation and is limited by the maximum number of tokens in a single round.
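Because the service mirrors the ChatGPT interface, the official `openai` Node SDK can also be pointed at it. The sketch below assumes a local deployment exposing OpenAI-compatible `/v1` routes on port 8000 and passes the refresh_token as the API key; the base URL and port are placeholders for your own setup, not values confirmed by this diff.

```ts
import OpenAI from "openai"; // npm install openai

// Assumptions: local glm-free-api deployment on port 8000 with OpenAI-compatible /v1 routes;
// the chatglm.cn refresh_token is supplied where an API key would normally go.
const client = new OpenAI({
  baseURL: "http://127.0.0.1:8000/v1",       // adjust to your deployment
  apiKey: "<your chatglm.cn refresh_token>", // placeholder
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "glm-4-deepresearch", // or glm-4 / glm-4-zero / glm-4-think
    messages: [{ role: "user", content: "Give a deep-research style overview of RAG evaluation." }],
  });
  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```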
File diff suppressed because one or more lines are too long
@@ -3,10 +3,6 @@ import _ from 'lodash';

import Request from '@/lib/request/Request.ts';
import Response from '@/lib/response/Response.ts';
import chat from '@/api/controllers/chat.ts';
import logger from '@/lib/logger.ts';

// Agent ID of the Zero reasoning model
const ZERO_ASSISTANT_ID = "676411c38945bbc58a905d31";

export default {
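To illustrate how a hard-coded constant like `ZERO_ASSISTANT_ID` can be put to work, the sketch below shows one way a route layer might translate the special model names onto assistant IDs before delegating to the chat controller. This is not the project's actual dispatch logic; the deepresearch ID and the `resolveAssistant` helper are hypothetical.

```ts
// Purely illustrative dispatch table — not code from this repository.
// The zero/think IDs reuse ZERO_ASSISTANT_ID from the snippet above (sharing it is an assumption);
// the deepresearch ID is a hypothetical placeholder.
const MODEL_TO_ASSISTANT: Record<string, string> = {
  "glm-4-zero": "676411c38945bbc58a905d31",
  "glm-4-think": "676411c38945bbc58a905d31",
  "glm-4-deepresearch": "<deepresearch assistant id>",
};

// Returns the assistant ID for a requested model name, if one is mapped;
// unknown names (e.g. a user-supplied agent ID) pass through untouched.
function resolveAssistant(model: string): string | undefined {
  return MODEL_TO_ASSISTANT[model];
}
```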