Mirror of https://github.com/labring/FastGPT.git (synced 2025-07-23 21:13:50 +00:00)

docs: update the framework of doc site (#207)

Signed-off-by: Carson Yang <yangchuansheng33@gmail.com>
This commit is contained in:

`docSite/content/docs/installation/_index.md` (new file, +8 lines)
---
weight: 700
title: "Self-Hosted Deployment"
description: "FastGPT self-hosted deployment documentation"
icon: menu_book
draft: false
images: []
---
`docSite/content/docs/installation/docker.md` (new file, +258 lines)
---
title: "Quick Deployment with Docker Compose"
description: "Deploy FastGPT quickly with Docker Compose"
icon: ""
draft: false
toc: true
weight: 720
---
## Prerequisites

### 1. Set up a proxy (servers outside mainland China can skip this)

Make sure your server can reach OpenAI. For options, see [Nginx relay](/docs/installation/proxy/nginx/).

### 2. Multi-model support

We recommend the one-api project for managing a pool of models; it is compatible with OpenAI, Azure, mainstream Chinese models, and more.

For deployment instructions, see that project's [README](https://github.com/songquanpeng/one-api), or deploy it with one click via the button below:

[](https://cloud.sealos.io/?openapp=system-fastdeploy%3FtemplateName%3Done-api)
## Install Docker and docker-compose

{{< tabs tabTotal="3" >}}
{{< tab tabName="Linux" >}}
{{< markdownify >}}

```bash
# Install Docker
curl -sSL https://get.daocloud.io/docker | sh
systemctl enable --now docker
# Install docker-compose
curl -L https://github.com/docker/compose/releases/download/v2.20.3/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
# Verify the installation
docker -v
docker-compose -v
```

{{< /markdownify >}}
{{< /tab >}}
{{< tab tabName="MacOS" >}}
{{< markdownify >}}

We recommend [Orbstack](https://orbstack.dev/), which can be installed via Homebrew:

```bash
brew install orbstack
```

Alternatively, [download the installer](https://orbstack.dev/download) directly.

{{< /markdownify >}}
{{< /tab >}}
{{< tab tabName="Windows" >}}
{{< markdownify >}}

When binding source code and other data into Linux containers, we recommend storing it on the Linux filesystem rather than the Windows filesystem.

You can [install Docker Desktop on Windows with the WSL 2 backend](https://docs.docker.com/desktop/wsl/),

or [install the command-line version of Docker directly inside WSL 2](https://nickjanetakis.com/blog/install-docker-in-wsl-2-without-docker-desktop).

{{< /markdownify >}}
{{< /tab >}}
{{< /tabs >}}
## Create the docker-compose.yml file

First create a directory (e.g. fastgpt) and enter it:

```bash
mkdir fastgpt
cd fastgpt
```

Create a docker-compose.yml file and paste in the following content:
```yaml
# non-host version; does not use the local proxy
version: '3.3'
services:
  pg:
    image: ankane/pgvector:v0.4.2
    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/pgvector:v0.4.2 # Alibaba Cloud
    container_name: pg
    restart: always
    ports: # avoid exposing these in production
      - 5432:5432
    networks:
      - fastgpt
    environment:
      # These settings only take effect on the first run. Changing them and
      # restarting the container has no effect; delete the persisted data
      # and restart for changes to apply.
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=postgres
    volumes:
      - ./pg/data:/var/lib/postgresql/data
  mongo:
    image: mongo:5.0.18
    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/mongo:5.0.18 # Alibaba Cloud
    container_name: mongo
    restart: always
    ports: # avoid exposing these in production
      - 27017:27017
    networks:
      - fastgpt
    environment:
      # These settings only take effect on the first run. Changing them and
      # restarting the container has no effect; delete the persisted data
      # and restart for changes to apply.
      - MONGO_INITDB_ROOT_USERNAME=username
      - MONGO_INITDB_ROOT_PASSWORD=password
    volumes:
      - ./mongo/data:/data/db
  fastgpt:
    container_name: fastgpt
    image: ghcr.io/labring/fastgpt:latest # GitHub
    # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:latest # Alibaba Cloud
    ports:
      - 3000:3000
    networks:
      - fastgpt
    depends_on:
      - mongo
      - pg
    restart: always
    environment:
      # root password; the username is root
      - DEFAULT_ROOT_PSW=1234
      # relay address; no need to change when using the official endpoint
      - OPENAI_BASE_URL=https://api.openai.com/v1
      - CHAT_API_KEY=sk-xxxx
      - DB_MAX_LINK=5 # maximum database connections
      - TOKEN_KEY=any
      - ROOT_KEY=root_key
      # mongo settings; no need to change
      - MONGODB_URI=mongodb://username:password@mongo:27017/?authSource=admin
      - MONGODB_NAME=fastgpt
      # pg settings
      - PG_HOST=pg
      - PG_PORT=5432
      - PG_USER=username
      - PG_PASSWORD=password
      - PG_DB_NAME=postgres
networks:
  fastgpt:
```

> Only 3 parameters of the fastgpt container need to be changed to get it running.
## Start the containers

```bash
# Run this in the directory containing docker-compose.yml
docker-compose up -d
```

## Access FastGPT

You can now access it directly via `ip:3000` (mind your firewall). The login username is `root`, and the password is the `DEFAULT_ROOT_PSW` value set in the environment variables above.

If you need access via a domain name, install and configure Nginx yourself.
## QA

### How do I update?

Run `docker-compose pull` to fetch the latest images, then `docker-compose up -d` to recreate the containers; no further steps are usually needed.

### How do I customize the configuration file?

Create a `config.json` file in the same directory as `docker-compose.yml`, with the following content:
```json
{
  "FeConfig": {
    "show_emptyChat": true,
    "show_register": false,
    "show_appStore": false,
    "show_userDetail": false,
    "show_git": true,
    "systemTitle": "FastGPT",
    "authorText": "Made by FastGPT Team.",
    "gitLoginKey": "",
    "scripts": []
  },
  "SystemParams": {
    "gitLoginSecret": "",
    "vectorMaxProcess": 15,
    "qaMaxProcess": 15,
    "pgIvfflatProbe": 20
  },
  "plugins": {},
  "ChatModels": [
    {
      "model": "gpt-3.5-turbo",
      "name": "GPT35-4k",
      "contextMaxToken": 4000,
      "quoteMaxToken": 2000,
      "maxTemperature": 1.2,
      "price": 0,
      "defaultSystem": ""
    },
    {
      "model": "gpt-3.5-turbo-16k",
      "name": "GPT35-16k",
      "contextMaxToken": 16000,
      "quoteMaxToken": 8000,
      "maxTemperature": 1.2,
      "price": 0,
      "defaultSystem": ""
    },
    {
      "model": "gpt-4",
      "name": "GPT4-8k",
      "contextMaxToken": 8000,
      "quoteMaxToken": 4000,
      "maxTemperature": 1.2,
      "price": 0,
      "defaultSystem": ""
    }
  ],
  "QAModels": [
    {
      "model": "gpt-3.5-turbo-16k",
      "name": "GPT35-16k",
      "maxToken": 16000,
      "price": 0
    }
  ],
  "VectorModels": [
    {
      "model": "text-embedding-ada-002",
      "name": "Embedding-2",
      "price": 0
    }
  ]
}
```
Then modify the `fastgpt` container in `docker-compose.yml` to add the volume mount:

```yaml
fastgpt:
  container_name: fastgpt
  image: ghcr.io/labring/fastgpt:latest # GitHub
  # image: registry.cn-hangzhou.aliyuncs.com/fastgpt/fastgpt:latest # Alibaba Cloud
  ports:
    - 3000:3000
  networks:
    - fastgpt
  depends_on:
    - mongo
    - pg
  restart: always
  environment:
    # root password; the username is root
    - DEFAULT_ROOT_PSW=1234
  volumes:
    - ./config.json:/app/data/config.json
```
> See the [configuration reference](/docs/installation/reference/configuration/) for details.
`docSite/content/docs/installation/one-api.md` (new file, +25 lines)
---
title: "Deploy one-api for Multi-Model Support"
description: "Support a variety of large language models by integrating one-api"
icon: "Api"
draft: false
toc: true
weight: 730
---

[one-api](https://github.com/songquanpeng/one-api) is an OpenAI API management & distribution system that lets you access all major large language models through the standard OpenAI API format, out of the box.

FastGPT can support a variety of large models by integrating one-api. Deployment is simple: just click the button below to deploy with one click 👇

[](https://cloud.sealos.io/?openapp=system-fastdeploy%3FtemplateName%3Done-api)

After deployment you will be taken to "App Launchpad"; the database lives in a separate "Database" app. Wait 1~3 minutes for the database to come up before accessing it.

Once the models are configured in one-api, simply update FastGPT's environment variables:

```bash
# The address below is provided by Sealos; be sure to include the /v1 suffix
OPENAI_BASE_URL=https://xxxx.cloud.sealos.io/v1
# The key below is issued by one-api
CHAT_API_KEY=sk-xxxxxx
```
`docSite/content/docs/installation/proxy/_index.md` (new file, +8 lines)
---
weight: 740
title: "Proxy Options"
description: "Accessing OpenAI through a proxy"
icon: public
draft: false
images: []
---
`docSite/content/docs/installation/proxy/cloudflare.md` (new file, +54 lines)
---
title: "Cloudflare Worker Relay"
description: "Relaying OpenAI requests with a Cloudflare Worker"
icon: "foggy"
draft: false
toc: true
weight: 742
---

[Based on the tutorial by "不做了睡觉"](https://gravel-twister-d32.notion.site/FastGPT-API-ba7bb261d5fd4fd9bbb2f0607dacdc9e)

**Worker configuration**

```js
const TELEGRAPH_URL = 'https://api.openai.com';

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // Security check
  if (request.headers.get('auth') !== 'auth_code') {
    return new Response('UnAuthorization', { status: 403 });
  }

  const url = new URL(request.url);
  url.host = TELEGRAPH_URL.replace(/^https?:\/\//, '');

  const modifiedRequest = new Request(url.toString(), {
    headers: request.headers,
    method: request.method,
    body: request.body,
    redirect: 'follow'
  });

  const response = await fetch(modifiedRequest);
  const modifiedResponse = new Response(response.body, response);

  // Add a response header to allow cross-origin access
  modifiedResponse.headers.set('Access-Control-Allow-Origin', '*');

  return modifiedResponse;
}
```
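The key step in the Worker is the host rewrite: it keeps the incoming path and query string and swaps only the host for api.openai.com. Sketched in isolation (the Worker domain below is a placeholder):

```javascript
// Sketch of the Worker's host-rewrite step; the Worker domain is a placeholder.
const TELEGRAPH_URL = 'https://api.openai.com';

function rewriteHost(incomingUrl) {
  const url = new URL(incomingUrl);
  url.host = TELEGRAPH_URL.replace(/^https?:\/\//, ''); // path and query are preserved
  return url.toString();
}

console.log(rewriteHost('https://my-worker.example.workers.dev/v1/chat/completions?stream=true'));
// → https://api.openai.com/v1/chat/completions?stream=true
```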
**Update FastGPT's environment variables**

> Don't forget the /v1 suffix!

```bash
OPENAI_BASE_URL=https://xxxxxx/v1
OPENAI_BASE_URL_AUTH=auth_code
```
`docSite/content/docs/installation/proxy/http_proxy.md` (new file, +47 lines)
---
title: "HTTP Proxy Relay"
description: "Relaying requests through an HTTP proxy"
icon: "http"
draft: false
toc: true
weight: 743
---

If you have a proxy tool (such as [Clash](https://github.com/Dreamacro/clash) or [sing-box](https://github.com/SagerNet/sing-box)), you can also reach OpenAI through an HTTP proxy. Just add these two environment variables:

```bash
AXIOS_PROXY_HOST=
AXIOS_PROXY_PORT=
```

Taking Clash as an example, we recommend routing only `api.openai.com` through the proxy and connecting to everything else directly. An example configuration:

```yaml
mixed-port: 7890
allow-lan: false
bind-address: '*'
mode: rule
log-level: warning
dns:
  enable: true
  ipv6: false
  nameserver:
    - 8.8.8.8
    - 8.8.4.4
  cache-size: 400
proxies:
  -
proxy-groups:
  - { name: '♻️ 自动选择', type: url-test, proxies: [香港V01×1.5], url: 'https://api.openai.com', interval: 3600}
rules:
  - 'DOMAIN-SUFFIX,api.openai.com,♻️ 自动选择'
  - 'MATCH,DIRECT'
```

Then add the two environment variables to FastGPT:

```bash
AXIOS_PROXY_HOST=127.0.0.1
AXIOS_PROXY_PORT=7890
```
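As the variable names suggest, these values are passed to axios. Roughly, the two variables map to the shape of axios's `proxy` request option like this (a sketch, not FastGPT's actual wiring):

```javascript
// Sketch: map AXIOS_PROXY_HOST/PORT to the shape of axios's `proxy` option.
// Not FastGPT's actual code; axios also accepts `proxy: false` to disable proxying.
function proxyFromEnv(env) {
  if (!env.AXIOS_PROXY_HOST || !env.AXIOS_PROXY_PORT) return false;
  return { host: env.AXIOS_PROXY_HOST, port: Number(env.AXIOS_PROXY_PORT) };
}

console.log(proxyFromEnv({ AXIOS_PROXY_HOST: '127.0.0.1', AXIOS_PROXY_PORT: '7890' }));
// → { host: '127.0.0.1', port: 7890 }
```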
`docSite/content/docs/installation/proxy/nginx.md` (new file, +105 lines)
---
title: "Nginx Relay"
description: "Deploying Nginx on Sealos as a relay"
icon: "cloud_sync"
draft: false
toc: true
weight: 741
---

## Log in to Sealos

[Sealos](https://cloud.sealos.io/)

## Create the app

Open "App Launchpad" and click "Create App".

### Fill in the basic settings

Be sure to enable public access, and copy the public address it provides.

### Add the configuration file

1. Copy the configuration below, replacing the value after `server_name` with the public address from the previous step.

```nginx
user nginx;
worker_processes auto;
worker_rlimit_nofile 51200;

events {
    worker_connections 1024;
}

http {
    resolver 8.8.8.8;
    proxy_ssl_server_name on;

    access_log off;
    server_names_hash_bucket_size 512;
    client_header_buffer_size 64k;
    large_client_header_buffers 4 64k;
    client_max_body_size 50M;

    proxy_connect_timeout 240s;
    proxy_read_timeout 240s;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;

    server {
        listen 80;
        server_name tgohwtdlrmer.cloud.sealos.io; # replace this with the public address provided by Sealos

        location ~ /openai/(.*) {
            proxy_pass https://api.openai.com/$1$is_args$args;
            proxy_set_header Host api.openai.com;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # for streaming responses
            proxy_set_header Connection '';
            proxy_http_version 1.1;
            chunked_transfer_encoding off;
            proxy_buffering off;
            proxy_cache off;
            # for regular responses
            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;
        }
    }
}
```

2. Open the advanced settings.
3. Click "Add configuration file".
4. Set the filename to `/etc/nginx/nginx.conf`.
5. Set the file content to the configuration you just copied.
6. Click confirm.
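The `location` block in the configuration forwards everything captured after `/openai/` to api.openai.com, preserving the query string via `$is_args$args`. The mapping, sketched in isolation:

```javascript
// Sketch of the URI mapping performed by `location ~ /openai/(.*)` above.
function mapToUpstream(pathname, search = '') {
  const m = pathname.match(/^\/openai\/(.*)$/);
  if (!m) return null; // no matching location
  return `https://api.openai.com/${m[1]}${search}`;
}

console.log(mapToUpstream('/openai/v1/chat/completions', '?stream=true'));
// → https://api.openai.com/v1/chat/completions?stream=true
```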
### Deploy the app

Once everything is filled in, click "Deploy" in the top-right corner to complete the deployment.

## Update FastGPT's environment variables

1. Open the details page of the app you just deployed and copy its public address.

> Note: this is an API address, so opening it directly in a browser won't work. To verify it, visit `*.cloud.sealos.io/openai/api`; if it responds with `Invalid URL (GET /api)`, the relay is working.

2. Update the environment variables (FastGPT's, not Sealos's):

```bash
OPENAI_BASE_URL=https://tgohwtdlrmer.cloud.sealos.io/openai/v1
```

**Done!**
`docSite/content/docs/installation/reference/_index.md` (new file, +8 lines)
---
weight: 750
title: "Configuration Guide"
description: "FastGPT configuration guide"
icon: quick_reference_all
draft: false
images: []
---
`docSite/content/docs/installation/reference/chatglm2.md` (new file, +66 lines)
---
title: "Integrating ChatGLM2-6B"
description: "Connecting FastGPT to the self-hosted model ChatGLM2-6B"
icon: "model_training"
draft: false
toc: true
weight: 753
---

## Introduction

FastGPT lets you use your own OpenAI API key to call the OpenAI APIs; it currently integrates GPT-3.5, GPT-4, and embeddings, so you can build your own knowledge base. For data-security reasons, however, we can't hand all our data to a cloud-hosted large model.

So how do you connect a self-hosted model to FastGPT? This article uses Tsinghua's ChatGLM2 as an example to walk through connecting a self-hosted model to FastGPT.

## About ChatGLM2-6B

ChatGLM2-6B is the second generation of the open-source bilingual (Chinese-English) chat model ChatGLM-6B; see the [ChatGLM2-6B project page](https://github.com/THUDM/ChatGLM2-6B) for details.

{{% alert context="warning" %}}
Note: the ChatGLM2-6B weights are fully open for academic research, and commercial use is also permitted after obtaining official written permission. This tutorial only describes one way to use the model and grants no license whatsoever!
{{% /alert %}}

## Recommended hardware

According to the official figures, generating the same 8192-token length takes 12.8 GB of VRAM at FP16, 8.1 GB at int8, and 5.1 GB at int4. Quantization slightly degrades quality, but not by much.

The recommended configurations are therefore:

{{< table "table-hover table-striped" >}}
| Precision | RAM | VRAM | Disk space | Launch command |
|------|---------|---------|----------|--------------------------|
| fp16 | >=16GB | >=16GB | >=25GB | python openai_api.py 16 |
| int8 | >=16GB | >=9GB | >=25GB | python openai_api.py 8 |
| int4 | >=16GB | >=6GB | >=25GB | python openai_api.py 4 |
{{< /table >}}

## Environment

+ Python 3.8.10
+ CUDA 11.8
+ Network access to OpenAI-blocked-free internet

## Deployment steps

1. Set up the environment listed above (ask GPT for detailed instructions if needed);
2. Run `pip install -r requirements.txt`;
3. Open the .py file you want to launch and configure the token at line 76 of the code; the token is just an extra layer of authentication to keep the API from being abused;
4. Run `python openai_api.py 16`, choosing the number according to the table above.

Then wait for the model to download and finish loading. If you hit an error, ask GPT first.

Once started successfully, it should print an address.

> The `http://0.0.0.0:6006` shown there is the connection address.

Now go back to the .env.local file and configure the address as follows:

```bash
OPENAI_BASE_URL=http://127.0.0.1:6006/v1
OPENAIKEY=sk-aaabbbcccdddeeefffggghhhiiijjjkkk # the token you configured in the code; the OPENAIKEY name itself can map to any value you chose
```
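Requests sent to that endpoint follow the standard OpenAI chat format. A sketch of what such a request looks like (the model name here is an assumption; openai_api.py may ignore it, and the token must match the one set in the code):

```javascript
// Sketch of a chat request against the ChatGLM2 OpenAI-compatible endpoint.
// 'chatglm2-6b' is an assumed model name; the token mirrors the one in openai_api.py.
const base = 'http://127.0.0.1:6006/v1';
const token = 'sk-aaabbbcccdddeeefffggghhhiiijjjkkk';

const request = {
  url: `${base}/chat/completions`,
  method: 'POST',
  headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
  body: JSON.stringify({ model: 'chatglm2-6b', messages: [{ role: 'user', content: '你好' }] }),
};

console.log(request.url); // → http://127.0.0.1:6006/v1/chat/completions
```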
And with that, ChatGLM2-6B is connected.
`docSite/content/docs/installation/reference/configuration.md` (new file, +117 lines)
---
title: "Configuration Reference"
description: "An introduction to FastGPT's configuration parameters"
icon: "settings"
draft: false
toc: true
weight: 751
---

Because environment variables are a poor fit for complex settings, the new version of FastGPT mounts a configuration file in the form of a ConfigMap; you can find the default configuration file at `client/data/config.json`. See [Quick deployment with docker-compose](/docs/installation/docker/) for how to mount it.

In development, you need to copy the sample `config.json` to a `config.local.json` file for it to take effect.

Note: for ease of explanation, the documentation puts comments inside the JSON; the actual JSON file must not contain comments at runtime.

The configuration file covers front-end page customization, system-level parameters, the models used for AI chat, and more.

{{% alert context="warning" %}}
Note: the walkthrough below covers only part of the file. You must mount the entire `config.json`, not just a fragment. You can simply take the default config.json and modify it according to the notes below.
{{% /alert %}}

## Basic fields, briefly

Here are some of the basic configuration fields:

```json
// This block controls some front-end styling
"FeConfig": {
    "show_emptyChat": true, // on the chat page, whether to show the intro page when there is no content
    "show_register": false, // whether to show the registration controls (forgot-password, sign-up, and third-party login)
    "show_appStore": false, // whether to show the app store (permissions aren't finished yet, so enabling it isn't useful)
    "show_userDetail": false, // whether to show user details (account balance, OpenAI binding)
    "show_git": true, // whether to show the Git link
    "systemTitle": "FastGPT", // the system title
    "authorText": "Made by FastGPT Team.", // the signature
    "gitLoginKey": "" // Git login credential
},
...
...
// This block holds system-level parameters
"SystemParams": {
    "gitLoginSecret": "", // Git login secret
    "vectorMaxProcess": 15, // max concurrent vector-generation processes; tune to your database capacity and API keys
    "qaMaxProcess": 15, // max concurrent QA-generation processes; tune to your database capacity and API keys
    "pgIvfflatProbe": 20 // pg vector search probes; can be ignored before an index is set up, usually only needed beyond ~500k rows
},
...
```
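Since comments are forbidden in the real file, a quick way to catch mistakes is to parse the file as strict JSON and check for the expected top-level sections (a sketch; the list of sections is taken from the sample configuration, not a documented schema):

```javascript
// Sketch: validate that config.json parses as strict JSON (comments would throw)
// and contains the top-level sections present in the sample configuration.
const sections = ['FeConfig', 'SystemParams', 'plugins', 'ChatModels', 'QAModels', 'VectorModels'];

function missingSections(text) {
  const cfg = JSON.parse(text); // throws on comments or trailing commas
  return sections.filter((key) => !(key in cfg));
}

console.log(missingSections(
  '{"FeConfig":{},"SystemParams":{},"plugins":{},"ChatModels":[],"QAModels":[],"VectorModels":[]}'
)); // → []
```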
## Full configuration parameters

```json
{
  "FeConfig": {
    "show_emptyChat": true,
    "show_register": false,
    "show_appStore": false,
    "show_userDetail": false,
    "show_git": true,
    "systemTitle": "FastGPT",
    "authorText": "Made by FastGPT Team.",
    "gitLoginKey": "",
    "scripts": []
  },
  "SystemParams": {
    "gitLoginSecret": "",
    "vectorMaxProcess": 15,
    "qaMaxProcess": 15,
    "pgIvfflatProbe": 20
  },
  "plugins": {},
  "ChatModels": [
    {
      "model": "gpt-3.5-turbo",
      "name": "GPT35-4k",
      "contextMaxToken": 4000,
      "quoteMaxToken": 2000,
      "maxTemperature": 1.2,
      "price": 0,
      "defaultSystem": ""
    },
    {
      "model": "gpt-3.5-turbo-16k",
      "name": "GPT35-16k",
      "contextMaxToken": 16000,
      "quoteMaxToken": 8000,
      "maxTemperature": 1.2,
      "price": 0,
      "defaultSystem": ""
    },
    {
      "model": "gpt-4",
      "name": "GPT4-8k",
      "contextMaxToken": 8000,
      "quoteMaxToken": 4000,
      "maxTemperature": 1.2,
      "price": 0,
      "defaultSystem": ""
    }
  ],
  "QAModels": [
    {
      "model": "gpt-3.5-turbo-16k",
      "name": "GPT35-16k",
      "maxToken": 16000,
      "price": 0
    }
  ],
  "VectorModels": [
    {
      "model": "text-embedding-ada-002",
      "name": "Embedding-2",
      "price": 0
    }
  ]
}
```
`docSite/content/docs/installation/reference/models.md` (new file, +74 lines)
---
title: "Multi-Model Support"
description: "How to integrate large models other than GPT"
icon: "model_training"
draft: false
toc: true
weight: 752
---

By default, FastGPT ships with only the 3 GPT models configured; integrating other models requires some extra configuration.

## Deploy one-api

First deploy a [one-api](/docs/installation/one-api/) instance and add the corresponding channels.

## Add the FastGPT configuration

You can find the configuration file at `/client/src/data/config.json` (for local development, copy it to config.local.json). One of its entries configures the chat models:

```json
"ChatModels": [
    {
        "model": "gpt-3.5-turbo", // must match a model configured in one-api
        "name": "FastAI-4k", // the name shown to users
        "contextMaxToken": 4000, // maximum context tokens, always counted GPT-3.5 style regardless of model. For non-GPT models, estimate this value yourself: call the official APIs to compare token ratios, then compute a rough figure here.
        // For example: ERNIE-Bot counts Chinese and English roughly 1:1, while GPT counts Chinese about 2:1, so if ERNIE-Bot's official max is 4000 tokens you can put 8000 here, or 7000 to be safe.
        "quoteMaxToken": 2000, // maximum tokens for knowledge-base quotes
        "maxTemperature": 1.2, // maximum temperature
        "price": 1.5, // price per token => 1.5 / 100000 * 1000 = 0.015 yuan per 1k tokens
        "defaultSystem": "" // default system prompt
    },
    {
        "model": "gpt-3.5-turbo-16k",
        "name": "FastAI-16k",
        "contextMaxToken": 16000,
        "quoteMaxToken": 8000,
        "maxTemperature": 1.2,
        "price": 3,
        "defaultSystem": ""
    },
    {
        "model": "gpt-4",
        "name": "FastAI-Plus",
        "contextMaxToken": 8000,
        "quoteMaxToken": 4000,
        "maxTemperature": 1.2,
        "price": 45,
        "defaultSystem": ""
    }
],
```
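The two back-of-the-envelope calculations from the comments above, written out (a sketch; the 2:1 vs 1:1 token ratios are the rough figures from the comment, not measured values):

```javascript
// contextMaxToken for a non-GPT model: scale its official limit by the ratio of
// GPT's tokens-per-character to the model's (rough figures from the comment above).
const gptTokensPerChineseChar = 2;   // GPT: ~2 tokens per Chinese character
const ernieTokensPerChineseChar = 1; // ERNIE-Bot: ~1 token per Chinese character
const ernieOfficialMax = 4000;
const contextMaxToken = ernieOfficialMax * (gptTokensPerChineseChar / ernieTokensPerChineseChar);
console.log(contextMaxToken); // → 8000 (use 7000 to be safe)

// price is in units of 1/100000 yuan per token, so the cost per 1k tokens is:
const price = 1.5;
const costPer1kTokens = (price * 1000) / 100000; // i.e. 1.5 / 100000 * 1000
console.log(costPer1kTokens); // → 0.015 (yuan)
```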
### Adding a new model

Taking ERNIE-Bot (文心一言) as an example:

```json
"ChatModels": [
    ...
    {
        "model": "ERNIE-Bot",
        "name": "文心一言",
        "contextMaxToken": 4000,
        "quoteMaxToken": 2000,
        "maxTemperature": 1,
        "price": 1.2
    }
    ...
]
```

After adding it, restart the app and you can select the ERNIE-Bot model for chat.
`docSite/content/docs/installation/sealos.md` (new file, +24 lines)
---
title: "One-Click Deployment on Sealos"
description: "Deploy FastGPT on Sealos with one click"
icon: "cloud"
draft: false
toc: true
weight: 710
---

Sealos's servers are located overseas, so there are no network issues to work around: no server of your own, no proxy tricks, and no domain name are required, and it supports high concurrency & dynamic scaling. Just click the button below to deploy with one click 👇

[](https://cloud.sealos.io/?openapp=system-fastdeploy%3FtemplateName%3Dfastgpt)

Because a database has to be deployed as well, wait 2~4 minutes after deployment before accessing it. The lowest-spec configuration is used by default, so the first visit may be a bit slow.

Click the public address provided by Sealos to open FastGPT's web interface.

> Username: `root`
>
> The password is the environment variable you set during the one-click deployment.
`docSite/content/docs/installation/upgrading/40.md` (new file, +66 lines)
---
title: "Upgrading to V4.0"
description: "How to upgrade FastGPT from an older version to V4.0"
icon: "upgrade"
draft: false
toc: true
weight: 761
---

If you are **upgrading from an older version to V4**, the new version changes the MongoDB collections substantially, so you need to run the initialization scripts described in this document.

## Rename the collections

Connect to the MongoDB database and run these two commands:

```mongodb
db.models.renameCollection("apps")
db.sharechats.renameCollection("outlinks")
```

{{% alert context="warning" %}}
Note: when updating from an old version to V4, MongoDB automatically creates empty collections with these names. You need to manually delete those two empty collections first, then run the commands above.
{{% /alert %}}

## Initialize fields in several collections

Run the following 3 commands in order. They take a while; if one fails you can run it again (already-initialized documents are skipped) until all data has been updated.

```mongodb
db.chats.find({appId: {$exists: false}}).forEach(function(item){
  db.chats.updateOne(
    {
      _id: item._id,
    },
    { "$set": {"appId":item.modelId}}
  )
})

db.collections.find({appId: {$exists: false}}).forEach(function(item){
  db.collections.updateOne(
    {
      _id: item._id,
    },
    { "$set": {"appId":item.modelId}}
  )
})

db.outlinks.find({shareId: {$exists: false}}).forEach(function(item){
  db.outlinks.updateOne(
    {
      _id: item._id,
    },
    { "$set": {"shareId":item._id.toString(),"appId":item.modelId}}
  )
})
```

## Initialization APIs

Deploy the new version, then send 3 HTTP requests (remember to include `headers.rootkey`, whose value comes from the environment variables):

1. https://xxxxx/api/admin/initv4
2. https://xxxxx/api/admin/initChat
3. https://xxxxx/api/admin/initOutlink

Requests 1 and 2 may die from running out of memory; they can be run repeatedly.
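The three requests above can also be scripted. A sketch (the base URL is a placeholder for your deployment's address, and the `rootkey` header carries the `ROOT_KEY` environment value):

```javascript
// Sketch: build the three init requests with the rootkey header.
// baseUrl is a placeholder for your deployment's address.
function initRequests(baseUrl, rootKey) {
  return ['initv4', 'initChat', 'initOutlink'].map((name) => ({
    url: `${baseUrl}/api/admin/${name}`,
    headers: { rootkey: rootKey },
  }));
}

// Send them in order, e.g.:
//   for (const r of initRequests('https://your-host', 'root_key')) {
//     await fetch(r.url, { headers: r.headers });
//   }
console.log(initRequests('https://your-host', 'root_key').map((r) => r.url));
```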
`docSite/content/docs/installation/upgrading/41.md` (new file, +27 lines)
---
title: "Upgrading to V4.1"
description: "How to upgrade FastGPT from an older version to V4.1"
icon: "upgrade"
draft: false
toc: true
weight: 762
---

If you are **upgrading from an older version to V4.1**, the new version restructures chat storage, so the existing stored content needs to be initialized.

## Update the environment variables

V4.1 streamlines the PostgreSQL and MongoDB connection variables; each now only needs a single URL:

```bash
# mongo settings; no need to change. If it can't connect, try removing ?authSource=admin
- MONGODB_URI=mongodb://username:password@mongo:27017/fastgpt?authSource=admin
# pg settings; no need to change
- PG_URL=postgresql://username:password@pg:5432/postgres
```

## Initialization API

Deploy the new version, then send 1 HTTP request (remember to include `headers.rootkey`, whose value comes from the environment variables):

+ https://xxxxx/api/admin/initChatItem
`docSite/content/docs/installation/upgrading/_index.md` (new file, +8 lines)
---
weight: 760
title: "Upgrading"
description: "FastGPT upgrade guides"
icon: upgrade
draft: false
images: []
---