50 Commits

Author SHA1 Message Date
Vinlic
cbf215d8a8 Merge branch 'master' of https://github.com/LLM-Red-Team/kimi-free-api 2024-04-17 12:24:11 +08:00
Vinlic
7c3bc3c0d8 Release 0.0.27 2024-04-17 12:18:41 +08:00
Vinlic
ae8e8316e4 Improve display of retrieved reference links to avoid URL parsing errors 2024-04-17 12:18:17 +08:00
Vinlic科技
e1b7e55e70 Update README.md 2024-04-17 01:10:10 +08:00
Vinlic科技
e1710ee95a Update README.md 2024-04-17 01:00:05 +08:00
Vinlic
d14d062078 Release 0.0.26 2024-04-13 02:14:48 +08:00
Vinlic
1a3327cc8d Fix issue where internet search could not be re-triggered in multi-turn dialogues 2024-04-13 02:14:28 +08:00
Vinlic科技
cfec318bd0 Merge pull request #56 from MichaelYuhe/master
docs: add deploy to Zeabur guide
2024-04-12 15:24:08 +08:00
Yuhang
1d18ac3f6b add deploy to Zeabur in Readme_en 2024-04-12 15:11:31 +08:00
Yuhang
b52e84bda0 add deploy to Zeabur in Readme 2024-04-12 15:10:35 +08:00
Vinlic
ee7cb9fdff Merge branch 'master' of https://github.com/Vinlic/kimi-free-api 2024-04-12 13:17:46 +08:00
Vinlic
a12a967202 update README 2024-04-12 13:17:23 +08:00
Vinlic科技
bff5623f73 update README 2024-04-11 18:53:03 +08:00
Vinlic
2d2454b65b update README 2024-04-11 15:03:04 +08:00
Vinlic
4642939835 update README 2024-04-11 14:28:32 +08:00
Vinlic
87593a270a Add Render deployment 2024-04-11 14:28:16 +08:00
Vinlic
ce89c29b05 Add Render deployment 2024-04-11 14:27:27 +08:00
Vinlic
3bb36fbbf0 Merge branch 'master' of https://github.com/Vinlic/kimi-free-api 2024-04-11 13:55:22 +08:00
Vinlic
3b3584bf4f Release 0.0.25 2024-04-11 13:54:50 +08:00
Vinlic
d1e0fcad2b Support Vercel deployment 2024-04-11 13:54:34 +08:00
Vinlic
674647e108 Switch from npm to yarn to speed up container builds 2024-04-11 13:54:16 +08:00
Vinlic科技
e244052c6a Merge pull request #54 from khazic/master
Newly added
2024-04-11 10:32:52 +08:00
khazic
6ced4e76d2 Newly added 2024-04-11 10:25:54 +08:00
Vinlic科技
4a1d39bdd8 Merge pull request #53 from khazic/master
README entry link
2024-04-11 10:19:19 +08:00
khazic
2cc8c2e13d README entry link 2024-04-11 10:15:05 +08:00
Vinlic科技
1ab9e980cf Merge pull request #52 from khazic/master
Wrote README_EN
2024-04-11 10:13:20 +08:00
khazic
97cc86f718 Wrote README_EN 2024-04-11 10:09:13 +08:00
Vinlic
d4f6fee14d update README 2024-04-10 18:31:52 +08:00
Vinlic
e157e40525 Add refresh_token liveness check 2024-04-10 18:22:00 +08:00
Vinlic
d08a4b2130 Release 0.0.24 2024-04-10 17:57:16 +08:00
Vinlic
31298c9566 update Dockerfile 2024-04-10 17:56:40 +08:00
Vinlic
fe63c20198 Release 0.0.23 2024-04-09 10:47:40 +08:00
Vinlic
72e29e4168 Add log warning for incorrect request URLs 2024-04-09 10:47:28 +08:00
Vinlic
9fd7ae890b Skip attention-prompt injection on the first turn 2024-04-08 22:26:05 +08:00
Vinlic
f5bea5ea68 Release 0.0.22 2024-04-08 22:24:13 +08:00
Vinlic
0b2c8434c9 Skip attention-prompt injection on the first turn 2024-04-08 22:23:54 +08:00
Vinlic
520f26f72f Release 0.0.21 2024-04-06 00:16:18 +08:00
Vinlic科技
462c64656e Merge pull request #42 from Yanyutin753/master
optimize code in messagesPrepare
2024-04-06 00:09:24 +08:00
Yanyutin753
cda36ed4fc fix the position of "\n" 2024-04-05 19:12:47 +08:00
Yanyutin753
70ea39591b optimize code in messagesPrepare 2024-04-05 18:54:04 +08:00
Vinlic
11a145924f Increase file upload timeout 2024-04-05 01:16:05 +08:00
Vinlic
1b2b7927ee Release 0.0.20 2024-04-03 00:00:46 +08:00
Vinlic
66cddd522b Adjust log output and attention-injection prompt 2024-04-02 23:27:38 +08:00
Vinlic科技
ff59201961 Merge pull request #38 from Yanyutin753/transfer
Reduce context confusion when sending files
2024-04-02 23:17:13 +08:00
Yanyutin753
6853087757 Reduce context confusion when sending files 2024-04-02 23:13:00 +08:00
Yanyutin753
1e09d807e6 Log uploaded messages 2024-04-02 21:15:36 +08:00
Yanyutin753
66067b4dd9 Improve file-upload context handling by adding a prompt 2024-04-02 20:54:46 +08:00
Vinlic
1534fbc77a Release 0.0.19 2024-04-01 22:53:18 +08:00
Vinlic科技
1e55571b2d Merge pull request #37 from Yanyutin753/transfer
feat the context transfer files
2024-04-01 22:48:00 +08:00
Yanyutin753
4380d0c05c feat the context transfer files 2024-04-01 22:33:34 +08:00
13 changed files with 686 additions and 46 deletions

.gitignore

@@ -1,3 +1,4 @@
 dist/
 node_modules/
 logs/
+.vercel


@@ -4,14 +4,15 @@ WORKDIR /app
 COPY . /app
-RUN npm i --registry http://registry.npmmirror.com && npm run build
+RUN yarn install --registry https://registry.npmmirror.com/ && yarn run build
 FROM node:lts-alpine
-COPY --from=BUILD_IMAGE /app/configs ./configs
-COPY --from=BUILD_IMAGE /app/package.json ./package.json
-COPY --from=BUILD_IMAGE /app/dist ./dist
-COPY --from=BUILD_IMAGE /app/node_modules ./node_modules
+COPY --from=BUILD_IMAGE /app/public /app/public
+COPY --from=BUILD_IMAGE /app/configs /app/configs
+COPY --from=BUILD_IMAGE /app/package.json /app/package.json
+COPY --from=BUILD_IMAGE /app/dist /app/dist
+COPY --from=BUILD_IMAGE /app/node_modules /app/node_modules
 WORKDIR /app


@@ -1,5 +1,11 @@
 # KIMI AI Free Service
+<hr>
+<span>[ 中文 | <a href="README_EN.md">English</a> ]</span>
 ![](https://img.shields.io/github/license/llm-red-team/kimi-free-api.svg)
 ![](https://img.shields.io/github/stars/llm-red-team/kimi-free-api.svg)
 ![](https://img.shields.io/github/forks/llm-red-team/kimi-free-api.svg)
@@ -9,7 +15,7 @@
Fully compatible with the ChatGPT interface.
The following free-api projects are also available:
StepFun (StepChat) interface converted to API [step-free-api](https://github.com/LLM-Red-Team/step-free-api)
@@ -17,6 +23,8 @@
 ZhipuAI (Zhipu Qingyan) interface converted to API [glm-free-api](https://github.com/LLM-Red-Team/glm-free-api)
+Metaso AI (metaso) interface converted to API [metaso-free-api](https://github.com/LLM-Red-Team/metaso-free-api)
+Emohaa (Lingxin Intelligence) interface converted to API [emohaa-free-api](https://github.com/LLM-Red-Team/emohaa-free-api)
## Table of Contents
@@ -28,11 +36,14 @@ ZhipuAI (智谱清言) 接口转API [glm-free-api](https://github.com/LLM-Red-Te
 * [Multi-Account Access](#多账号接入)
 * [Docker Deployment](#Docker部署)
 * [Docker-compose Deployment](#Docker-compose部署)
+* [Render Deployment](#Render部署)
+* [Vercel Deployment](#Vercel部署)
 * [Native Deployment](#原生部署)
 * [API List](#接口列表)
 * [Chat Completion](#对话补全)
 * [Document Interpretation](#文档解读)
 * [Image Analysis](#图像解析)
+* [refresh_token Liveness Check](#refresh_token存活检测)
 * [Notes](#注意事项)
 * [Nginx Reverse Proxy Optimization](#Nginx反代优化)
@@ -54,23 +65,23 @@ https://udify.app/chat/Po0F6BMJ15q5vu2P
 ## Effect Examples
-### Identity Verification
+### Identity Verification Demo
 ![Identity Verification](./doc/example-1.png)
-### Multi-turn Dialogue
+### Multi-turn Dialogue Demo
 ![Multi-turn Dialogue](./doc/example-6.png)
-### Internet Search
+### Internet Search Demo
 ![Internet Search](./doc/example-2.png)
-### Long Document Reading
+### Long Document Reading Demo
 ![Long Document Reading](./doc/example-5.png)
-### Image Analysis
+### Image Analysis Demo
 ![Image Analysis](./doc/example-3.png)
@@ -142,6 +153,39 @@ services:
- TZ=Asia/Shanghai
```
### Render Deployment
**Note: some deployment regions may be unable to reach kimi. If the container logs show request timeouts or connection failures (Singapore has been tested and does not work), switch to another deployment region.**
**Note: container instances on free accounts stop automatically after a period of inactivity, causing a delay of 50 seconds or more on the next request. See [Render container keep-alive](https://github.com/LLM-Red-Team/free-api-hub/#Render%E5%AE%B9%E5%99%A8%E4%BF%9D%E6%B4%BB).**
1. Fork this project to your GitHub account.
2. Visit [Render](https://dashboard.render.com/) and sign in with your GitHub account.
3. Build your Web Service (New+ -> Build and deploy from a Git repository -> connect your forked project -> choose a deployment region -> choose the Free instance type -> Create Web Service).
4. After the build completes, copy the assigned domain and access the service via that URL.

### Vercel Deployment
**Note: Vercel free accounts have a 10-second request timeout; since the API usually responds more slowly, you may hit a 504 error returned by Vercel.**
Please make sure the Node.js environment is installed first.
```shell
npm i -g vercel --registry http://registry.npmmirror.com
vercel login
git clone https://github.com/LLM-Red-Team/kimi-free-api
cd kimi-free-api
vercel --prod
```
### Zeabur Deployment
**Note: container instances on free accounts may not run stably.**
[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/templates/GRFYBP)
## Native Deployment
Please prepare a server with a public IP and open port 8000.
@@ -379,6 +423,26 @@ Authorization: Bearer [refresh_token]
}
```
### refresh_token Liveness Check
Checks whether a refresh_token is alive: `live` is true if it is, false otherwise. Do not call this interface frequently (keep the interval above 10 minutes).
**POST /token/check**
Request data:
```json
{
"token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9..."
}
```
Response data:
```json
{
"live": true
}
```
## Notes
### Nginx Reverse Proxy Optimization
@@ -404,4 +468,4 @@ keepalive_timeout 120;
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=LLM-Red-Team/kimi-free-api&type=Date)](https://star-history.com/#LLM-Red-Team/kimi-free-api&Date)

README_EN.md

@@ -0,0 +1,434 @@
# KIMI AI Free Service
![](https://img.shields.io/github/license/llm-red-team/kimi-free-api.svg)
![](https://img.shields.io/github/stars/llm-red-team/kimi-free-api.svg)
![](https://img.shields.io/github/forks/llm-red-team/kimi-free-api.svg)
![](https://img.shields.io/docker/pulls/vinlic/kimi-free-api.svg)
Supports high-speed streaming output, multi-turn dialogues, internet search, long document reading, image analysis, zero-configuration deployment, multi-token support, and automatic session trace cleanup.
Fully compatible with the ChatGPT interface.
The following free-api projects are also available:
StepFun (StepChat) interface converted to API [step-free-api](https://github.com/LLM-Red-Team/step-free-api)
Ali Tongyi (Qwen) interface converted to API [qwen-free-api](https://github.com/LLM-Red-Team/qwen-free-api)
ZhipuAI (Zhipu Qingyan) interface converted to API [glm-free-api](https://github.com/LLM-Red-Team/glm-free-api)
Metaso AI (metaso) interface converted to API [metaso-free-api](https://github.com/LLM-Red-Team/metaso-free-api)
Emohaa (Lingxin Intelligence) interface converted to API [emohaa-free-api](https://github.com/LLM-Red-Team/emohaa-free-api)
## Table of Contents
* [Disclaimer](#disclaimer)
* [Online Experience](#online-experience)
* [Effect Examples](#effect-examples)
* [Access Preparation](#access-preparation)
* [Multi-Account Access](#multi-account-access)
* [Docker Deployment](#docker-deployment)
* [Docker-compose Deployment](#docker-compose-deployment)
* [Native Deployment](#native-deployment)
* [Zeabur Deployment](#zeabur-deployment)
* [Interface List](#interface-list)
* [Conversation Completion](#conversation-completion)
* [Document Interpretation](#document-interpretation)
* [Image Analysis](#image-analysis)
* [refresh_token Liveness Check](#refresh_token-liveness-check)
* [Notes](#notes)
* [Nginx Reverse Proxy Optimization](#nginx-reverse-proxy-optimization)
## Disclaimer
**This organization and its individuals do not accept any financial donations or transactions. This project is for research, communication, and learning purposes only!**
**For personal use only. Do not provide the service to others or use it commercially, to avoid putting pressure on the official service; otherwise, bear the risk yourself!**
**For personal use only. Do not provide the service to others or use it commercially, to avoid putting pressure on the official service; otherwise, bear the risk yourself!**
**For personal use only. Do not provide the service to others or use it commercially, to avoid putting pressure on the official service; otherwise, bear the risk yourself!**
## Online Experience
This link is only for temporary testing of functions and cannot be used for a long time. For long-term use, please deploy by yourself.
https://udify.app/chat/Po0F6BMJ15q5vu2P
## Effect Examples
### Identity Verification
![Identity Verification](./doc/example-1.png)
### Multi-turn Dialogue
![Multi-turn Dialogue](./doc/example-6.png)
### Internet Search
![Internet Search](./doc/example-2.png)
### Long Document Reading
![Long Document Reading](./doc/example-5.png)
### Image Analysis
![Image Analysis](./doc/example-3.png)
### Consistent Responsiveness
![Consistent Responsiveness](https://github.com/LLM-Red-Team/kimi-free-api/assets/20235341/48c7ec00-2b03-46c4-95d0-452d3075219b)
## Access Preparation
Get the `refresh_token` from [kimi.moonshot.cn](https://kimi.moonshot.cn)
Start a conversation with kimi, then open the developer tools with F12 and find the value of `refresh_token` under Application > Local Storage. Use it as the Bearer token value in the Authorization header: `Authorization: Bearer TOKEN`
![example0](./doc/example-0.png)
If you see `refresh_token` as an array, please use `.` to join it before using.
![example8](./doc/example-8.jpg)
### Multi-Account Access
Currently, kimi limits ordinary accounts to 30 rounds of long-text Q&A every 3 hours (short text is unlimited). You can provide the refresh_tokens of multiple accounts joined with `,`:
`Authorization: Bearer TOKEN1,TOKEN2,TOKEN3`
The service will pick one of them for each request.
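This splitting can be sketched as below; it mirrors the `tokenSplit` helper shown in the chat controller diff further down this page. The random selection is illustrative only — the source does not specify the exact pick strategy.

```typescript
// Sketch: multiple refresh_tokens are comma-joined in the Authorization
// header; the service splits them and uses one per request.
function tokenSplit(authorization: string): string[] {
    return authorization.replace('Bearer ', '').split(',');
}

const tokens = tokenSplit('Bearer TOKEN1,TOKEN2,TOKEN3');
// Illustrative: pick a random token for this request
const picked = tokens[Math.floor(Math.random() * tokens.length)];
console.log(tokens.length, tokens.includes(picked)); // 3 true
```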
## Docker Deployment
Please prepare a server with a public IP and open port 8000.
Pull the image and start the service
```shell
docker run -it -d --init --name kimi-free-api -p 8000:8000 -e TZ=Asia/Shanghai vinlic/kimi-free-api:latest
```
Check real-time service logs
```shell
docker logs -f kimi-free-api
```
Restart service
```shell
docker restart kimi-free-api
```
Stop the service
```shell
docker stop kimi-free-api
```
### Docker-compose deployment
```yaml
version: '3'
services:
kimi-free-api:
container_name: kimi-free-api
image: vinlic/kimi-free-api:latest
restart: always
ports:
- "8000:8000"
environment:
- TZ=Asia/Shanghai
```
## Native deployment
Please prepare a server with a public IP and open port 8000.
Please install the Node.js environment first, configure the environment variables, and confirm that the `node` command is available.
Install dependencies
```shell
npm i
```
Install PM2 for process daemonization
```shell
npm i -g pm2
```
Compile and build. When you see the dist directory, the build is complete.
```shell
npm run build
```
Start service
```shell
pm2 start dist/index.js --name "kimi-free-api"
```
View real-time service logs
```shell
pm2 logs kimi-free-api
```
Restart service
```shell
pm2 reload kimi-free-api
```
Stop the service
```shell
pm2 stop kimi-free-api
```
## Zeabur Deployment
[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/templates/GRFYBP)
## Interface List
Currently, the OpenAI-compatible `/v1/chat/completions` interface is supported. You can use an OpenAI-compatible client to access it, or use an online service such as [dify](https://dify.ai/).
### Conversation Completion
Conversation completion interface, compatible with OpenAI's [chat-completions-api](https://platform.openai.com/docs/guides/text-generation/chat-completions-api).
**POST /v1/chat/completions**
The header needs to set the Authorization header:
```
Authorization: Bearer [refresh_token]
```
Request data:
```json
{
// Fill in the model name as you like. If you do not want the retrieval process to be output, include silent_search in the model name.
"model": "kimi",
"messages": [
{
"role": "user",
"content": "test"
}
],
// Whether to enable online search, default false
"use_search": true,
// If using SSE stream, please set it to true, the default is false
"stream": false
}
```
Response data:
```json
{
"id": "cnndivilnl96vah411dg",
"model": "kimi",
"object": "chat.completion",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! I am Kimi, an artificial intelligence assistant developed by Dark Side of the Moon Technology Co., Ltd. I am good at conversation in Chinese and English. I can help you obtain information, answer questions, and read and understand the documents you provide. and web content. If you have any questions or need help, feel free to let me know!"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 1,
"completion_tokens": 1,
"total_tokens": 2
},
"created": 1710152062
}
```
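A minimal sketch of consuming this response shape is given below. The `ChatResponse` interface and the shortened sample values are reduced from the example above for illustration; they are not the project's actual types.

```typescript
// Minimal sketch: pull the assistant's reply out of a parsed
// /v1/chat/completions response (shape as in the example above).
interface ChatResponse {
    id: string;
    model: string;
    choices: {
        index: number;
        message: { role: string; content: string };
        finish_reason: string;
    }[];
}

const resp: ChatResponse = {
    id: 'cnndivilnl96vah411dg',
    model: 'kimi',
    choices: [{ index: 0, message: { role: 'assistant', content: 'Hello!' }, finish_reason: 'stop' }]
};

const reply = resp.choices[0].message.content;
console.log(reply); // prints "Hello!"
```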
### Document interpretation
Provide an accessible file URL or BASE64_URL to parse.
**POST /v1/chat/completions**
The header needs to set the Authorization header:
```
Authorization: Bearer [refresh_token]
```
Request data:
```json
{
// Fill in the model name as you like. If you do not want the retrieval process to be output, include silent_search in the model name.
"model": "kimi",
"messages": [
{
"role": "user",
"content": [
{
"type": "file",
"file_url": {
"url": "https://mj101-1317487292.cos.ap-shanghai.myqcloud.com/ai/test.pdf"
}
},
{
"type": "text",
"text": "What does the document say?"
}
]
}
],
// It is recommended to turn off online search to prevent interference in interpreting results.
"use_search": false
}
```
Response data:
```json
{
"id": "cnmuo7mcp7f9hjcmihn0",
"model": "kimi",
"object": "chat.completion",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The document contains several examples of ancient magical spells from magical texts from the ancient Greek and Roman periods known as PGM (Papyri Graecae Magicae). The following are examples of several spells mentioned in the document Contents:\n\n1. The first spell (PMG 4.1390 1495) describes a ritual that requires leaving some of your leftover bread, dividing it into seven small pieces, and then going to the heroes, gladiators, and those who died violent deaths The place where people were killed. Spell a spell on the piece of bread and throw it out, then pick up some contaminated soil from the ritual site and throw it into the home of the woman you like, then go to sleep. The content of the spell is to pray to the goddess of fate (Moirai), The Roman goddesses of Fates and the forces of nature (Daemons) were invoked to help make wishes come true.\n\n2. The second incantation (PMG 4.1342 57) was a summoning spell performed by speaking a series of mystical names and Words to summon a being called Daemon to cause a person named Tereous (born from Apia) to be mentally and emotionally tortured until she came to the spellcaster Didymos (born from Taipiam).\n \n3. The third spell (PGM 4.1265 74) mentions a mysterious name called NEPHERIĒRI, which is related to Aphrodite, the goddess of love. In order to win the heart of a beautiful woman, one needs to keep it for three days of purity, offer frankincense and recite the name while offering the offering. Then, as you approach the lady, recite the name silently seven times in your mind and do this for seven consecutive days with the hope of success.\n\n4. The fourth mantra ( PGM 4.1496 1) describes an incantation recited while burning myrrh. 
This incantation is a prayer to myrrh in the hope that it will attract a person named [name ] woman (her mother's name was [name]), making her unable to sit, eat, look at or kiss other people, but instead had only the caster in her mind until she came to the caster.\n\nThese Spells reflect ancient people's beliefs in magic and supernatural powers, and the ways in which they attempted to influence the emotions and behavior of others through these spells."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 1,
"completion_tokens": 1,
"total_tokens": 2
},
"created": 100920
}
```
### Image analysis
Provide an accessible image URL or BASE64_URL to parse.
This format is compatible with the [gpt-4-vision-preview](https://platform.openai.com/docs/guides/vision) API format. You can also use this format to transmit documents for parsing.
**POST /v1/chat/completions**
The header needs to set the Authorization header:
```
Authorization: Bearer [refresh_token]
```
Request data:
```json
{
// Fill in the model name as you like. If you do not want the retrieval process to be output, include silent_search in the model name.
"model": "kimi",
"messages": [
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://www.moonshot.cn/assets/logo/normal-dark.png"
}
},
{
"type": "text",
"text": "What does the image describe?"
}
]
}
],
// It is recommended to turn off online search to prevent interference in interpreting results.
"use_search": false
}
```
Response data:
```json
{
"id": "cnn6l8ilnl92l36tu8ag",
"model": "kimi",
"object": "chat.completion",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The image shows the words "Moonshot AI", which may be the logo or brand identity of Dark Side of the Moon Technology Co., Ltd. (Moonshot AI). Usually such images are used to represent a company or product and convey brand information .Since the image is in PNG format, it could be a logo with a transparent background, used on a website, app, or other visual material."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 1,
"completion_tokens": 1,
"total_tokens": 2
},
"created": 1710123627
}
```
### refresh_token Liveness Check
Checks whether the refresh_token is alive: `live` is true if it is, false otherwise. Do not call this interface frequently (keep the interval above 10 minutes).
**POST /token/check**
Request data:
```json
{
"token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9..."
}
```
Response data:
```json
{
"live": true
}
```
## Notes
### Nginx Reverse Proxy Optimization
If you use Nginx to reverse proxy kimi-free-api, add the following configuration items to optimize the streaming output and improve the experience.
```nginx
# Turn off proxy buffering. When set to off, Nginx will immediately send client requests to the backend server and immediately send responses received from the backend server back to the client.
proxy_buffering off;
# Enable chunked transfer encoding. Chunked transfer encoding allows servers to send data in chunks for dynamically generated content without knowing the size of the content in advance.
chunked_transfer_encoding on;
# Turn on TCP_NOPUSH, which tells Nginx to send as much data as possible before sending the packet to the client. This is usually used in conjunction with sendfile to improve network efficiency.
tcp_nopush on;
# Turn on TCP_NODELAY, which tells Nginx not to delay sending data and to send small data packets immediately. In some cases, this can reduce network latency.
tcp_nodelay on;
# Set the keep-alive timeout, here 120 seconds. If there is no further communication between the client and server within this time, the connection is closed.
keepalive_timeout 120;
```
### Token Statistics
Since inference does not happen in kimi-free-api, tokens cannot be counted and will be returned as a fixed number.
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=LLM-Red-Team/kimi-free-api&type=Date)](https://star-history.com/#LLM-Red-Team/kimi-free-api&Date)


@@ -1,6 +1,6 @@
{
"name": "kimi-free-api",
"version": "0.0.18",
"version": "0.0.27",
"description": "Kimi Free API Server",
"type": "module",
"main": "dist/index.js",

public/welcome.html

@@ -0,0 +1,10 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8"/>
<title>🚀 服务已启动</title>
</head>
<body>
<p>kimi-free-api已启动<br>请通过LobeChat / NextChat / Dify等客户端或OpenAI SDK接入</p>
</body>
</html>


@@ -328,21 +328,27 @@ async function fakeRequest(refreshToken: string) {
 * @param messages Messages in GPT-series format; for multi-turn dialogues, provide the full context
*/
function extractRefFileUrls(messages: any[]) {
-    return messages.reduce((urls, message) => {
-        if (_.isArray(message.content)) {
-            message.content.forEach(v => {
-                if (!_.isObject(v) || !['file', 'image_url'].includes(v['type']))
-                    return;
-                // Format supported by kimi-free-api
-                if (v['type'] == 'file' && _.isObject(v['file_url']) && _.isString(v['file_url']['url']))
-                    urls.push(v['file_url']['url']);
-                // Compatible with the gpt-4-vision-preview API format
-                else if (v['type'] == 'image_url' && _.isObject(v['image_url']) && _.isString(v['image_url']['url']))
-                    urls.push(v['image_url']['url']);
-            });
-        }
-        return urls;
-    }, []);
+    const urls = [];
+    // Return [] if there are no messages
+    if (!messages.length) {
+        return urls;
+    }
+    // Only take the latest message
+    const lastMessage = messages[messages.length - 1];
+    if (_.isArray(lastMessage.content)) {
+        lastMessage.content.forEach(v => {
+            if (!_.isObject(v) || !['file', 'image_url'].includes(v['type']))
+                return;
+            // Format supported by kimi-free-api
+            if (v['type'] == 'file' && _.isObject(v['file_url']) && _.isString(v['file_url']['url']))
+                urls.push(v['file_url']['url']);
+            // Compatible with the gpt-4-vision-preview API format
+            else if (v['type'] == 'image_url' && _.isObject(v['image_url']) && _.isString(v['image_url']['url']))
+                urls.push(v['image_url']['url']);
+        });
+    }
+    logger.info("本次请求上传:" + urls.length + "个文件");
+    return urls;
 }
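The change above makes file extraction look only at the latest message. A standalone sketch of that behavior (untyped message objects and hypothetical example.com URLs for brevity):

```typescript
// Sketch of the updated behavior: only file/image URLs from the
// latest message are extracted; earlier turns are ignored.
function extractRefFileUrls(messages: any[]): string[] {
    const urls: string[] = [];
    if (!messages.length) return urls;
    const last = messages[messages.length - 1];
    if (Array.isArray(last.content)) {
        for (const v of last.content) {
            if (v && typeof v === 'object' && v.type === 'file' && v.file_url && typeof v.file_url.url === 'string')
                urls.push(v.file_url.url);    // kimi-free-api format
            else if (v && typeof v === 'object' && v.type === 'image_url' && v.image_url && typeof v.image_url.url === 'string')
                urls.push(v.image_url.url);   // gpt-4-vision-preview format
        }
    }
    return urls;
}

const messages = [
    { role: 'user', content: [{ type: 'file', file_url: { url: 'https://example.com/old.pdf' } }] },
    { role: 'user', content: [{ type: 'image_url', image_url: { url: 'https://example.com/new.png' } }] }
];
// Only the latest message's URL is returned
console.log(extractRefFileUrls(messages));
```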
/**
@@ -356,17 +362,39 @@ function extractRefFileUrls(messages: any[]) {
 * @param messages Messages in GPT-series format; for multi-turn dialogues, provide the full context
*/
function messagesPrepare(messages: any[]) {
+    // Inject a message to improve attention
+    let latestMessage = messages[messages.length - 1];
+    let hasFileOrImage = Array.isArray(latestMessage.content)
+        && latestMessage.content.some(v => (typeof v === 'object' && ['file', 'image_url'].includes(v['type'])));
+    // Inject the system prompt from the second turn onwards
+    if (messages.length > 2) {
+        if (hasFileOrImage) {
+            let newFileMessage = {
+                "content": "关注用户最新发送文件和消息",
+                "role": "system"
+            };
+            messages.splice(messages.length - 1, 0, newFileMessage);
+            logger.info("注入提升尾部文件注意力system prompt");
+        } else {
+            let newTextMessage = {
+                "content": "关注用户最新的消息",
+                "role": "system"
+            };
+            messages.splice(messages.length - 1, 0, newTextMessage);
+            logger.info("注入提升尾部消息注意力system prompt");
+        }
+    }
     const content = messages.reduce((content, message) => {
-        if (_.isArray(message.content)) {
+        if (Array.isArray(message.content)) {
             return message.content.reduce((_content, v) => {
-                if (!_.isObject(v) || v['type'] != 'text')
-                    return _content;
-                return _content + (v['text'] || '');
+                if (!_.isObject(v) || v['type'] != 'text') return _content;
+                return _content + `${message.role || "user"}:${v["text"] || ""}\n`;
             }, content);
         }
-        return content += `${message.role || 'user'}:${wrapUrlsToTags(message.content)}\n`;
+        return content += `${message.role || 'user'}:${message.role == 'user' ? wrapUrlsToTags(message.content) : message.content}\n`;
     }, '');
+    logger.info("\n对话合并\n" + content);
     return [
         { role: 'user', content }
     ]
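The merge step above flattens every turn into a single role-prefixed transcript that is sent as one user message. A simplified sketch of just that step (URL wrapping and the attention-prompt injection are omitted here):

```typescript
// Simplified sketch of the merge step: all turns are concatenated into
// one "role:content" transcript and wrapped in a single user message.
function messagesPrepare(messages: { role?: string; content: string }[]) {
    const content = messages.reduce(
        (acc, m) => acc + `${m.role || 'user'}:${m.content}\n`, '');
    return [{ role: 'user', content }];
}

const merged = messagesPrepare([
    { role: 'user', content: 'hi' },
    { role: 'assistant', content: 'hello' }
]);
console.log(merged[0].content); // "user:hi\nassistant:hello\n" as a literal template
```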
@@ -474,8 +502,8 @@ async function uploadFile(fileUrl: string, refreshToken: string) {
         data: fileData,
         // 100M limit
         maxBodyLength: FILE_MAX_SIZE,
-        // 60-second timeout
-        timeout: 60000,
+        // 120-second timeout
+        timeout: 120000,
         headers: {
             'Content-Type': mimeType,
             Authorization: `Bearer ${token}`,
@@ -582,7 +610,7 @@ async function receiveStream(model: string, convId: string, stream: any) {
}
         // Handle internet search results
         else if (!silentSearch && result.event == 'search_plus' && result.msg && result.msg.type == 'get_res')
-            refContent += `${result.msg.title}(${result.msg.url})\n`;
+            refContent += `${result.msg.title} - ${result.msg.url}\n`;
// else
// logger.warn(result.event, result);
}
@@ -679,7 +707,7 @@ function createTransStream(model: string, convId: string, stream: any, endCallba
choices: [
{
index: 0, delta: {
-                        content: `检索 ${result.msg.title}(${result.msg.url}) ...\n`
+                        content: `检索 ${result.msg.title} - ${result.msg.url} ...\n`
}, finish_reason: null
}
],
@@ -711,9 +739,35 @@ function tokenSplit(authorization: string) {
return authorization.replace('Bearer ', '').split(',');
}
/**
 * Get token liveness status
*/
async function getTokenLiveStatus(refreshToken: string) {
const result = await axios.get('https://kimi.moonshot.cn/api/auth/token/refresh', {
headers: {
Authorization: `Bearer ${refreshToken}`,
Referer: 'https://kimi.moonshot.cn/',
...FAKE_HEADERS
},
timeout: 15000,
validateStatus: () => true
});
try {
const {
access_token,
refresh_token
} = checkResult(result, refreshToken);
return !!(access_token && refresh_token)
}
catch(err) {
return false;
}
}
export default {
createConversation,
createCompletion,
createCompletionStream,
getTokenLiveStatus,
tokenSplit
};
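The liveness decision in `getTokenLiveStatus` above boils down to: the token is considered alive only if the refresh endpoint returns both an `access_token` and a `refresh_token`. A minimal sketch of that predicate:

```typescript
// Sketch of the liveness check: alive only if the refresh response
// contains both tokens; any error or missing field means "dead".
function isAlive(result: { access_token?: string; refresh_token?: string }): boolean {
    return !!(result.access_token && result.refresh_token);
}

console.log(isAlive({ access_token: 'a', refresh_token: 'b' })); // true
console.log(isAlive({}));                                        // false
```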


@@ -1,7 +1,25 @@
import fs from 'fs-extra';
import Response from '@/lib/response/Response.ts';
import chat from "./chat.ts";
import ping from "./ping.ts";
import token from './token.ts';
export default [
{
get: {
'/': async () => {
const content = await fs.readFile('public/welcome.html');
return new Response(content, {
type: 'html',
headers: {
Expires: '-1'
}
});
}
}
},
chat,
-    ping
+    ping,
+    token
];

src/api/routes/token.ts

@@ -0,0 +1,25 @@
import _ from 'lodash';
import Request from '@/lib/request/Request.ts';
import Response from '@/lib/response/Response.ts';
import chat from '@/api/controllers/chat.ts';
import logger from '@/lib/logger.ts';
export default {
    prefix: '/token',
    post: {
        '/check': async (request: Request) => {
            request
                .validate('body.token', _.isString)
            const live = await chat.getTokenLiveStatus(request.body.token);
            return {
                live
            }
        }
    }
}


@@ -9,13 +9,15 @@ import { format as dateFormat } from 'date-fns';
import config from './config.ts';
import util from './util.ts';
const isVercelEnv = process.env.VERCEL;
class LogWriter {
#buffers = [];
     constructor() {
-        fs.ensureDirSync(config.system.logDirPath);
-        this.work();
+        !isVercelEnv && fs.ensureDirSync(config.system.logDirPath);
+        !isVercelEnv && this.work();
}
push(content) {
@@ -24,16 +26,16 @@ class LogWriter {
}
     writeSync(buffer) {
-        fs.appendFileSync(path.join(config.system.logDirPath, `/${util.getDateString()}.log`), buffer);
+        !isVercelEnv && fs.appendFileSync(path.join(config.system.logDirPath, `/${util.getDateString()}.log`), buffer);
     }
     async write(buffer) {
-        await fs.appendFile(path.join(config.system.logDirPath, `/${util.getDateString()}.log`), buffer);
+        !isVercelEnv && await fs.appendFile(path.join(config.system.logDirPath, `/${util.getDateString()}.log`), buffer);
     }
     flush() {
         if(!this.#buffers.length) return;
-        fs.appendFileSync(path.join(config.system.logDirPath, `/${util.getDateString()}.log`), Buffer.concat(this.#buffers));
+        !isVercelEnv && fs.appendFileSync(path.join(config.system.logDirPath, `/${util.getDateString()}.log`), Buffer.concat(this.#buffers));
     }
work() {


@@ -15,7 +15,7 @@ export default class FailureBody extends Body {
else if(error instanceof APIException || error instanceof Exception)
({ errcode, errmsg, data, httpStatusCode } = error);
else if(_.isError(error))
-        error = new Exception(EX.SYSTEM_ERROR, error.message);
+        ({ errcode, errmsg, data, httpStatusCode } = new Exception(EX.SYSTEM_ERROR, error.message));
super({
code: errcode || -1,
message: errmsg || 'Internal error',


@@ -73,7 +73,11 @@ class Server {
this.app.use((ctx: any) => {
const request = new Request(ctx);
logger.debug(`-> ${ctx.request.method} ${ctx.request.url} request is not supported - ${request.remoteIP || "unknown"}`);
-            const failureBody = new FailureBody(new Exception(EX.SYSTEM_NOT_ROUTE_MATCHING, "Request is not supported"));
+            // const failureBody = new FailureBody(new Exception(EX.SYSTEM_NOT_ROUTE_MATCHING, "Request is not supported"));
+            // const response = new Response(failureBody);
+            const message = `[请求有误]: 正确请求为 POST -> /v1/chat/completions当前请求为 ${ctx.request.method} -> ${ctx.request.url} 请纠正`;
+            logger.warn(message);
+            const failureBody = new FailureBody(new Error(message));
const response = new Response(failureBody);
response.injectTo(ctx);
if(config.system.requestLog)

vercel.json

@@ -0,0 +1,27 @@
{
    "builds": [
        {
            "src": "./dist/*.html",
            "use": "@vercel/static"
        },
        {
            "src": "./dist/index.js",
            "use": "@vercel/node"
        }
    ],
    "routes": [
        {
            "src": "/",
            "dest": "/dist/welcome.html"
        },
        {
            "src": "/(.*)",
            "dest": "/dist",
            "headers": {
                "Access-Control-Allow-Credentials": "true",
                "Access-Control-Allow-Methods": "GET,OPTIONS,PATCH,DELETE,POST,PUT",
                "Access-Control-Allow-Headers": "X-CSRF-Token, X-Requested-With, Accept, Accept-Version, Content-Length, Content-MD5, Content-Type, Date, X-Api-Version, Content-Type, Authorization"
            }
        }
    ]
}