16 Commits

Author  SHA1  Message  Date
Vinlic  b7946835a8  Release 0.0.18  2024-04-01 15:46:48 +08:00
Vinlic  4a3168845e  Release 0.0.17  2024-04-01 15:46:25 +08:00
Vinlic  ae541f533e  Handle the occasional � replacement character in the output  2024-04-01 15:46:12 +08:00
Vinlic  980b506e94  update README  2024-03-31 04:25:02 +08:00
Vinlic  f7b6a9e64a  update README  2024-03-31 03:46:36 +08:00
Vinlic  b71e8d4b24  update README  2024-03-29 12:01:24 +08:00
Vinlic  f9daf10455  update README  2024-03-29 11:29:00 +08:00
Vinlic  a387e133fb  update README  2024-03-25 04:17:04 +08:00
Vinlic科技  c6e6c7e660  Merge pull request #19 from khazic/master ("nb vlao")  2024-03-21 19:22:18 +08:00
khazic  ff54eb3ebb  nb  2024-03-21 19:16:14 +08:00
Vinlic科技  eccce82ade  Merge pull request #16 from peanut996/master (Add health check API)  2024-03-20 13:47:06 +08:00
peanut996  4fe9b654f5  Add health check API  2024-03-20 13:46:21 +08:00
Vinlic  7cbebf780c  update README  2024-03-20 01:46:22 +08:00
Vinlic  909796bd91  Support silent search when the model name contains silent_search (search process not printed)  2024-03-20 01:37:25 +08:00
Vinlic  b8134a64a5  Release 0.0.15  2024-03-19 15:56:49 +08:00
Vinlic  c9b3574b0b  Add disguise requests in an attempt to reduce the chance of account bans  2024-03-19 15:56:27 +08:00
7 changed files with 121 additions and 45 deletions

README.md

@@ -9,9 +9,19 @@
Fully compatible with the ChatGPT API.
The following four free-api projects are also worth following:
StepFun (StepChat) interface converted to API: [step-free-api](https://github.com/LLM-Red-Team/step-free-api)
Alibaba Tongyi (Qwen) interface converted to API: [qwen-free-api](https://github.com/LLM-Red-Team/qwen-free-api)
ZhipuAI (智谱清言) interface converted to API: [glm-free-api](https://github.com/LLM-Red-Team/glm-free-api)
Lingxin Intelligence (Emohaa) interface converted to API: [emohaa-free-api](https://github.com/LLM-Red-Team/emohaa-free-api)
## Table of Contents
* [Statement](#声明)
* [Disclaimer](#免责声明)
* [Live Demo](#在线体验)
* [Example Outputs](#效果示例)
* [Preparation](#接入准备)
@@ -26,13 +36,15 @@
* [Notes](#注意事项)
* [Nginx Reverse Proxy Optimization](#Nginx反代优化)
## Statement
## Disclaimer
For personal use only. Do not provide it to others as a service or use it commercially, to avoid putting pressure on the official service; otherwise you bear the risk yourself!
**This organization and its members do not accept any financial donations or transactions. This project is purely for research, exchange, and learning!**
For personal use only. Do not provide it to others as a service or use it commercially, to avoid putting pressure on the official service; otherwise you bear the risk yourself!
**For personal use only. Do not provide it to others as a service or use it commercially, to avoid putting pressure on the official service; otherwise you bear the risk yourself!**
For personal use only. Do not provide it to others as a service or use it commercially, to avoid putting pressure on the official service; otherwise you bear the risk yourself!
**For personal use only. Do not provide it to others as a service or use it commercially, to avoid putting pressure on the official service; otherwise you bear the risk yourself!**
**For personal use only. Do not provide it to others as a service or use it commercially, to avoid putting pressure on the official service; otherwise you bear the risk yourself!**
## Live Demo
@@ -66,10 +78,6 @@ https://udify.app/chat/Po0F6BMJ15q5vu2P
![Consistent response fluency](https://github.com/LLM-Red-Team/kimi-free-api/assets/20235341/48c7ec00-2b03-46c4-95d0-452d3075219b)
### 100-thread concurrency test
![100-thread concurrency test](./doc/example-7.jpg)
## Preparation
Obtain a refresh_token from [kimi.moonshot.cn](https://kimi.moonshot.cn)
@@ -84,7 +92,7 @@ https://udify.app/chat/Po0F6BMJ15q5vu2P
### Multi-account access
Kimi currently limits regular accounts to 30 rounds of long-text Q&A every 3 hours. You can provide the refresh_tokens of multiple accounts, joined with `,`:
Kimi currently limits regular accounts to 30 rounds of long-text Q&A every 3 hours (short text is not limited). You can provide the refresh_tokens of multiple accounts, joined with `,`:
`Authorization: Bearer TOKEN1,TOKEN2,TOKEN3`
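For illustration, a minimal client-side sketch of multi-account access; the base URL, port, and endpoint path are assumptions about a typical deployment, not taken from this diff:

```ts
// Hypothetical call: refresh_tokens of three accounts joined with "," in one header.
// The service picks one of them at random per request (see the routes/chat.ts diff below).
const response = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer TOKEN1,TOKEN2,TOKEN3"
  },
  body: JSON.stringify({
    model: "kimi",
    messages: [{ role: "user", content: "Hello" }]
  })
});
console.log(await response.json());
```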
@@ -201,6 +209,8 @@ Authorization: Bearer [refresh_token]
Request data:
```json
{
// The model name can be anything; to suppress the search process in the output, include silent_search in the model name
"model": "kimi",
"messages": [
{
"role": "user",
@@ -254,6 +264,8 @@ Authorization: Bearer [refresh_token]
Request data:
```json
{
// The model name can be anything; to suppress the search process in the output, include silent_search in the model name
"model": "kimi",
"messages": [
{
"role": "user",
@@ -318,6 +330,8 @@ Authorization: Bearer [refresh_token]
Request data:
```json
{
// The model name can be anything; to suppress the search process in the output, include silent_search in the model name
"model": "kimi",
"messages": [
{
"role": "user",
@@ -386,4 +400,8 @@ keepalive_timeout 120;
### Token counting
Since inference does not happen on the kimi-free-api side, tokens cannot be counted and are returned as a fixed number.
Since inference does not happen on the kimi-free-api side, tokens cannot be counted and are returned as a fixed number!!!!!
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=LLM-Red-Team/kimi-free-api&type=Date)](https://star-history.com/#LLM-Red-Team/kimi-free-api&Date)

Binary image (98 KiB) removed; likely doc/example-7.jpg, the screenshot dropped from the README above. Preview not shown.

package.json

@@ -1,6 +1,6 @@
{
"name": "kimi-free-api",
"version": "0.0.14",
"version": "0.0.18",
"description": "Kimi Free API Server",
"type": "module",
"main": "dist/index.js",

src/api/controllers/chat.ts

@@ -24,7 +24,7 @@ const FAKE_HEADERS = {
'Accept-Encoding': 'gzip, deflate, br, zstd',
'Accept-Language': 'zh-CN,zh;q=0.9',
'Origin': 'https://kimi.moonshot.cn',
'Cookie': util.generateCookie(),
// 'Cookie': util.generateCookie(),
'R-Timezone': 'Asia/Shanghai',
'Sec-Ch-Ua': '"Chromium";v="122", "Not(A:Brand";v="24", "Google Chrome";v="122"',
'Sec-Ch-Ua-Mobile': '?0',
@@ -57,7 +57,7 @@ async function requestToken(refreshToken: string) {
const result = await axios.get('https://kimi.moonshot.cn/api/auth/token/refresh', {
headers: {
Authorization: `Bearer ${refreshToken}`,
Referer: 'https://kimi.moonshot.cn',
Referer: 'https://kimi.moonshot.cn/',
...FAKE_HEADERS
},
timeout: 15000,
@@ -74,7 +74,7 @@ async function requestToken(refreshToken: string) {
}
})()
.then(result => {
if(accessTokenRequestQueueMap[refreshToken]) {
if (accessTokenRequestQueueMap[refreshToken]) {
accessTokenRequestQueueMap[refreshToken].forEach(resolve => resolve(result));
delete accessTokenRequestQueueMap[refreshToken];
}
@@ -82,13 +82,13 @@ async function requestToken(refreshToken: string) {
return result;
})
.catch(err => {
if(accessTokenRequestQueueMap[refreshToken]) {
if (accessTokenRequestQueueMap[refreshToken]) {
accessTokenRequestQueueMap[refreshToken].forEach(resolve => resolve(err));
delete accessTokenRequestQueueMap[refreshToken];
}
return err;
});
if(_.isError(result))
if (_.isError(result))
throw result;
return result;
}
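The requestToken hunks above are mostly whitespace fixes, but they show the single-flight pattern the controller relies on: concurrent refreshes for the same refresh_token are parked in accessTokenRequestQueueMap and settled by the one in-flight request. A standalone sketch of that idea, with hypothetical names rather than the controller's own:

```ts
// Single-flight helper: callers that arrive while a request for the same key
// is already running wait for that request's outcome instead of issuing their own.
type Waiter<T> = { resolve: (value: T) => void; reject: (error: unknown) => void };

const inFlight = new Map<string, Waiter<unknown>[]>();

async function singleFlight<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const waiters = inFlight.get(key);
  if (waiters) {
    // Someone is already refreshing this key; queue up and wait for its result.
    return new Promise<T>((resolve, reject) =>
      waiters.push({ resolve: resolve as (value: unknown) => void, reject })
    );
  }
  inFlight.set(key, []);
  try {
    const result = await fn();
    inFlight.get(key)!.forEach(w => w.resolve(result));
    return result;
  } catch (err) {
    inFlight.get(key)!.forEach(w => w.reject(err));
    throw err;
  } finally {
    inFlight.delete(key);
  }
}
```

The controller's variant resolves waiting callers with the Error object and re-checks it with _.isError afterwards; the sketch rejects instead, which amounts to the same thing for callers that await.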
@@ -128,7 +128,7 @@ async function createConversation(name: string, refreshToken: string) {
}, {
headers: {
Authorization: `Bearer ${token}`,
Referer: 'https://kimi.moonshot.cn',
Referer: 'https://kimi.moonshot.cn/',
...FAKE_HEADERS
},
timeout: 15000,
@@ -164,12 +164,13 @@ async function removeConversation(convId: string, refreshToken: string) {
/**
* Synchronous chat completion
*
* @param model Model name
* @param messages Messages in GPT-style format; for multi-turn conversations provide the full context
* @param refreshToken The refresh_token used to refresh the access_token
* @param useSearch Whether to enable web search
* @param retryCount Retry count
*/
async function createCompletion(messages: any[], refreshToken: string, useSearch = true, retryCount = 0) {
async function createCompletion(model = MODEL_NAME, messages: any[], refreshToken: string, useSearch = true, retryCount = 0) {
return (async () => {
logger.info(messages);
@@ -177,6 +178,10 @@ async function createCompletion(messages: any[], refreshToken: string, useSearch
const refFileUrls = extractRefFileUrls(messages);
const refs = refFileUrls.length ? await Promise.all(refFileUrls.map(fileUrl => uploadFile(fileUrl, refreshToken))) : [];
// Make a disguise call to fetch user info
fakeRequest(refreshToken)
.catch(err => logger.error(err));
// Create a conversation
const convId = await createConversation(`cmpl-${util.uuid(false)}`, refreshToken);
@@ -200,7 +205,7 @@ async function createCompletion(messages: any[], refreshToken: string, useSearch
const streamStartTime = util.timestamp();
// Receive the stream as output text
const answer = await receiveStream(convId, result.data);
const answer = await receiveStream(model, convId, result.data);
logger.success(`Stream has completed transfer ${util.timestamp() - streamStartTime}ms`);
// Asynchronously remove the conversation; if the message violates policy this may throw a database error, which can be ignored
@@ -210,12 +215,12 @@ async function createCompletion(messages: any[], refreshToken: string, useSearch
return answer;
})()
.catch(err => {
if(retryCount < MAX_RETRY_COUNT) {
if (retryCount < MAX_RETRY_COUNT) {
logger.error(`Stream response error: ${err.message}`);
logger.warn(`Try again after ${RETRY_DELAY / 1000}s...`);
return (async () => {
await new Promise(resolve => setTimeout(resolve, RETRY_DELAY));
return createCompletion(messages, refreshToken, useSearch, retryCount + 1);
return createCompletion(model, messages, refreshToken, useSearch, retryCount + 1);
})();
}
throw err;
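The same hunks show the retry strategy used by both completion functions: on a stream error, wait a fixed delay and call yourself again until MAX_RETRY_COUNT is exhausted. A generic sketch of that loop; the helper name and default values are made up, not the controller's constants:

```ts
// Retry an async operation up to maxRetries times, sleeping delayMs between attempts.
async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3, delayMs = 5000): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      console.warn(`Attempt ${attempt + 1} failed, retrying in ${delayMs / 1000}s...`);
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}
```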
@@ -225,12 +230,13 @@ async function createCompletion(messages: any[], refreshToken: string, useSearch
/**
* Streaming chat completion
*
* @param model Model name
* @param messages Messages in GPT-style format; for multi-turn conversations provide the full context
* @param refreshToken The refresh_token used to refresh the access_token
* @param useSearch Whether to enable web search
* @param retryCount Retry count
*/
async function createCompletionStream(messages: any[], refreshToken: string, useSearch = true, retryCount = 0) {
async function createCompletionStream(model = MODEL_NAME, messages: any[], refreshToken: string, useSearch = true, retryCount = 0) {
return (async () => {
logger.info(messages);
@@ -238,6 +244,10 @@ async function createCompletionStream(messages: any[], refreshToken: string, use
const refFileUrls = extractRefFileUrls(messages);
const refs = refFileUrls.length ? await Promise.all(refFileUrls.map(fileUrl => uploadFile(fileUrl, refreshToken))) : [];
// Make a disguise call to fetch user info
fakeRequest(refreshToken)
.catch(err => logger.error(err));
// Create a conversation
const convId = await createConversation(`cmpl-${util.uuid(false)}`, refreshToken);
@@ -260,7 +270,7 @@ async function createCompletionStream(messages: any[], refreshToken: string, use
});
const streamStartTime = util.timestamp();
// Create a transform stream that converts messages to a GPT-compatible format
return createTransStream(convId, result.data, () => {
return createTransStream(model, convId, result.data, () => {
logger.success(`Stream has completed transfer ${util.timestamp() - streamStartTime}ms`);
// After the stream ends, asynchronously remove the conversation; if the message violates policy this may throw a database error, which can be ignored
removeConversation(convId, refreshToken)
@@ -268,18 +278,50 @@ async function createCompletionStream(messages: any[], refreshToken: string, use
});
})()
.catch(err => {
if(retryCount < MAX_RETRY_COUNT) {
if (retryCount < MAX_RETRY_COUNT) {
logger.error(`Stream response error: ${err.message}`);
logger.warn(`Try again after ${RETRY_DELAY / 1000}s...`);
return (async () => {
await new Promise(resolve => setTimeout(resolve, RETRY_DELAY));
return createCompletionStream(messages, refreshToken, useSearch, retryCount + 1);
return createCompletionStream(model, messages, refreshToken, useSearch, retryCount + 1);
})();
}
throw err;
});
}
/**
* Call a few endpoints to disguise normal user access
*
* Picks one at random
*
* @param refreshToken The refresh_token used to refresh the access_token
*/
async function fakeRequest(refreshToken: string) {
const token = await acquireToken(refreshToken);
const options = {
headers: {
Authorization: `Bearer ${token}`,
Referer: `https://kimi.moonshot.cn/`,
...FAKE_HEADERS
}
};
await [
() => axios.get('https://kimi.moonshot.cn/api/user', options),
() => axios.get('https://kimi.moonshot.cn/api/chat_1m/user/status', options),
() => axios.post('https://kimi.moonshot.cn/api/chat/list', {
offset: 0,
size: 50
}, options),
() => axios.post('https://kimi.moonshot.cn/api/show_case/list', {
offset: 0,
size: 4,
enable_cache: true,
order: "asc"
}, options)
][Math.floor(Math.random() * 4)]();
}
/**
* Extract the file URLs referenced in the messages
*
@@ -356,7 +398,7 @@ async function preSignUrl(filename: string, refreshToken: string) {
timeout: 15000,
headers: {
Authorization: `Bearer ${token}`,
Referer: `https://kimi.moonshot.cn`,
Referer: `https://kimi.moonshot.cn/`,
...FAKE_HEADERS
},
validateStatus: () => true
@@ -437,7 +479,7 @@ async function uploadFile(fileUrl: string, refreshToken: string) {
headers: {
'Content-Type': mimeType,
Authorization: `Bearer ${token}`,
Referer: `https://kimi.moonshot.cn`,
Referer: `https://kimi.moonshot.cn/`,
...FAKE_HEADERS
},
validateStatus: () => true
@@ -453,7 +495,7 @@ async function uploadFile(fileUrl: string, refreshToken: string) {
}, {
headers: {
Authorization: `Bearer ${token}`,
Referer: `https://kimi.moonshot.cn`,
Referer: `https://kimi.moonshot.cn/`,
...FAKE_HEADERS
}
});
@@ -466,7 +508,7 @@ async function uploadFile(fileUrl: string, refreshToken: string) {
}, {
headers: {
Authorization: `Bearer ${token}`,
Referer: `https://kimi.moonshot.cn`,
Referer: `https://kimi.moonshot.cn/`,
...FAKE_HEADERS
}
});
@@ -501,15 +543,16 @@ function checkResult(result: AxiosResponse, refreshToken: string) {
/**
* Receive the complete message content from the stream
*
* @param model Model name
* @param convId Conversation ID
* @param stream Message stream
*/
async function receiveStream(convId: string, stream: any) {
async function receiveStream(model: string, convId: string, stream: any) {
return new Promise((resolve, reject) => {
// Initialize the message object
const data = {
id: convId,
model: MODEL_NAME,
model,
object: 'chat.completion',
choices: [
{ index: 0, message: { role: 'assistant', content: '' }, finish_reason: 'stop' }
@@ -518,6 +561,7 @@ async function receiveStream(convId: string, stream: any) {
created: util.unixTimestamp()
};
let refContent = '';
const silentSearch = model.indexOf('silent_search') != -1;
const parser = createParser(event => {
try {
if (event.type !== "event") return;
@@ -526,8 +570,9 @@ async function receiveStream(convId: string, stream: any) {
if (_.isError(result))
throw new Error(`Stream response invalid: ${event.data}`);
// Handle message content; truncate at the � replacement character that occasionally appears in the stream
if (result.event == 'cmpl') {
data.choices[0].message.content += result.text;
if (result.event == 'cmpl' && result.text) {
const exceptCharIndex = result.text.indexOf("�");
data.choices[0].message.content += result.text.substring(0, exceptCharIndex == -1 ? result.text.length : exceptCharIndex);
}
// Handle completion or error
else if (result.event == 'all_done' || result.event == 'error') {
@@ -536,7 +581,7 @@ async function receiveStream(convId: string, stream: any) {
resolve(data);
}
// Handle web search results
else if (result.event == 'search_plus' && result.msg && result.msg.type == 'get_res')
else if (!silentSearch && result.event == 'search_plus' && result.msg && result.msg.type == 'get_res')
refContent += `${result.msg.title}(${result.msg.url})\n`;
// else
// logger.warn(result.event, result);
@@ -558,19 +603,21 @@ async function receiveStream(convId: string, stream: any) {
*
* Convert the stream format to a GPT-compatible stream format
*
* @param model Model name
* @param convId Conversation ID
* @param stream Message stream
* @param endCallback Callback invoked when the transfer ends
*/
function createTransStream(convId: string, stream: any, endCallback?: Function) {
function createTransStream(model: string, convId: string, stream: any, endCallback?: Function) {
// Message creation time
const created = util.unixTimestamp();
// Create the transform stream
const transStream = new PassThrough();
let searchFlag = false;
const silentSearch = model.indexOf('silent_search') != -1;
!transStream.closed && transStream.write(`data: ${JSON.stringify({
id: convId,
model: MODEL_NAME,
model,
object: 'chat.completion.chunk',
choices: [
{ index: 0, delta: { role: 'assistant', content: '' }, finish_reason: null }
@@ -586,12 +633,14 @@ function createTransStream(convId: string, stream: any, endCallback?: Function)
throw new Error(`Stream response invalid: ${event.data}`);
// Handle message content; truncate at the � replacement character that occasionally appears in the stream
if (result.event == 'cmpl') {
const exceptCharIndex = result.text.indexOf("�");
const chunk = result.text.substring(0, exceptCharIndex == -1 ? result.text.length : exceptCharIndex);
const data = `data: ${JSON.stringify({
id: convId,
model: MODEL_NAME,
model,
object: 'chat.completion.chunk',
choices: [
{ index: 0, delta: { content: (searchFlag ? '\n' : '') + result.text }, finish_reason: null }
{ index: 0, delta: { content: (searchFlag ? '\n' : '') + chunk }, finish_reason: null }
],
created
})}\n\n`;
@@ -603,7 +652,7 @@ function createTransStream(convId: string, stream: any, endCallback?: Function)
else if (result.event == 'all_done' || result.event == 'error') {
const data = `data: ${JSON.stringify({
id: convId,
model: MODEL_NAME,
model,
object: 'chat.completion.chunk',
choices: [
{
@@ -620,12 +669,12 @@ function createTransStream(convId: string, stream: any, endCallback?: Function)
endCallback && endCallback();
}
// Handle web search results
else if (result.event == 'search_plus' && result.msg && result.msg.type == 'get_res') {
else if (!silentSearch && result.event == 'search_plus' && result.msg && result.msg.type == 'get_res') {
if (!searchFlag)
searchFlag = true;
const data = `data: ${JSON.stringify({
id: convId,
model: MODEL_NAME,
model,
object: 'chat.completion.chunk',
choices: [
{

src/api/routes/chat.ts

@@ -19,15 +19,16 @@ export default {
const tokens = chat.tokenSplit(request.headers.authorization);
// Randomly pick one refresh_token
const token = _.sample(tokens);
const model = request.body.model;
const messages = request.body.messages;
if (request.body.stream) {
const stream = await chat.createCompletionStream(request.body.messages, token, request.body.use_search);
const stream = await chat.createCompletionStream(model, messages, token, request.body.use_search);
return new Response(stream, {
type: "text/event-stream"
});
}
else
return await chat.createCompletion(messages, token, request.body.use_search);
return await chat.createCompletion(model, messages, token, request.body.use_search);
}
}
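For the streaming branch, the response body is a standard text/event-stream of chat.completion.chunk objects, so any OpenAI-style SSE client works. A simplified consumer sketch; the base URL, port, and endpoint path are assumptions, and it assumes each read returns whole `data:` lines, which a robust client should not:

```ts
// Consume the chat.completion.chunk events produced by createTransStream.
const res = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer YOUR_REFRESH_TOKEN"
  },
  body: JSON.stringify({
    model: "kimi",
    stream: true,
    messages: [{ role: "user", content: "Hello" }]
  })
});

const reader = res.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  for (const line of decoder.decode(value, { stream: true }).split("\n")) {
    if (!line.startsWith("data: ") || line.includes("[DONE]")) continue;
    const chunk = JSON.parse(line.slice("data: ".length));
    // Print the incremental delta content as it arrives.
    process.stdout.write(chunk.choices[0].delta.content ?? "");
  }
}
```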

src/api/routes/index.ts

@@ -1,5 +1,7 @@
import chat from "./chat.ts";
import ping from "./ping.ts";
export default [
chat
chat,
ping
];

src/api/routes/ping.ts (new file)

@@ -0,0 +1,6 @@
export default {
prefix: '/ping',
get: {
'': async () => "pong"
}
}
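With the ping route registered in routes/index.ts above, a health check is just a GET request. A sketch, with host and port assumed:

```ts
// The /ping route above responds with the literal string "pong".
const res = await fetch("http://localhost:8000/ping");
console.log(await res.text()); // expected: "pong"
```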