docs(faq): update the FAQ docs and add a notes page (#6465)

* docs(faq): update the FAQ docs and add a notes page

- Renumber questions in error.mdx and add an OCR configuration question
- Add the attention.mdx page with troubleshooting steps and support guidance
- Restructure dataset.mdx with numbered headings and add the flickering knowledge-base page issue

* docs: restructure the self-hosting troubleshooting docs and add detailed guides

Split the original FAQ document into topic-specific pages: general troubleshooting, S3 issues, OneAPI errors, model availability, and troubleshooting methods. Updated the navigation menu and table of contents so the docs are better organized and users can locate and solve specific problems faster.

Added detailed troubleshooting steps, curl test examples, and concrete fixes, especially for common issues such as object-storage connectivity, signature errors, and failed model calls.

* docs: remove the deprecated OneAPI error-troubleshooting doc

Remove the `oneapi-errors` doc files, whose content is outdated or has been merged into other chapters. Update references in the Chinese and English tables of contents and metadata files accordingly.

* docs: update FAQ content, remove outdated entries, and renumber

- Remove the outdated FAQ entry about the OneAPI official website
- Renumber the troubleshooting FAQ sections so the numbering is continuous
- Keep the Chinese and English docs in sync

---------

Co-authored-by: Archer <545436317@qq.com>
zjj-225
2026-03-17 14:44:54 +08:00
committed by GitHub
parent 567d408158
commit f057a2ae19
21 changed files with 757 additions and 426 deletions
+27
@@ -0,0 +1,27 @@
---
title: Notes
description: FastGPT notes and precautions
---
# Notes
When you run into problems while using FastGPT, follow the steps below to troubleshoot and resolve them.
## 1. Check Your Version and Upgrade
Many known issues are already fixed in the latest release. Before reporting a problem, confirm your version:
- **Check the version**: see the running version number on the FastGPT home page or in the admin console.
- **Upgrade first**: if you are not on the latest version, follow the [upgrade guide](../development/upgrading) to the latest stable release first.
## 2. Troubleshooting Steps
If the problem persists after upgrading, work through the following in order:
- **Check the logs**: inspect the Docker container or server logs for the concrete error stack.
- **Clear the cache**: clear the browser cache or retry in an incognito window.
- **Check the environment**: confirm the database connections (MongoDB, PostgreSQL/Milvus) are healthy and the API keys are valid.
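The log-scan step above can be sketched as follows. This is a runnable illustration only: `fastgpt` is a placeholder container name, and the sample log line is fabricated so the scan itself executes without Docker.

```shell
# Sketch of the "check the logs" step. A real log would come from:
#   docker logs fastgpt > fastgpt.log 2>&1
# Here a sample log is fabricated so the grep is runnable as-is.
printf 'INFO  server ready\nError: connect ECONNREFUSED 127.0.0.1:27017\n' > fastgpt.log
grep -n -i 'error' fastgpt.log
```

The matched line (here a MongoDB connection refusal) is exactly the detail worth attaching when reporting an issue.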
## 3. Contact Support
If none of the above resolves the issue, reach out through:
- **Community**: search GitHub Issues or the community groups for similar reports.
- **Provide details**: when contacting support staff, always include:
  - the full version number you are running;
  - a detailed description of the problem, including reproduction steps;
  - the relevant system error logs or screenshots.
+14 -14
@@ -2,37 +2,33 @@
title: Knowledge Base FAQ
description: Common knowledge base questions
---
## (1)File parsing failed
[PDF enhanced parsing] was not enabled. If the [PDF enhanced parsing] option is off in the file-upload settings, the OCR module must be correctly configured in the Admin console for enhanced parsing to work.
## Uploaded file content shows garbled Chinese characters
## (2)Uploaded file content shows garbled Chinese characters
Save the file as UTF-8.
## What is the file-processing model in the knowledge base settings, and how does it differ from the index model?
## (3)What is the file-processing model in the knowledge base settings, and how does it differ from the index model?
* **File-processing model**: used by the [enhanced processing] and [Q&A split] data-processing steps. [Enhanced processing] generates related questions and summaries; [Q&A split] generates question-answer pairs.
* **Index model**: used for vectorization, i.e. processing and organizing text into a structure that supports fast retrieval.
## Does the knowledge base support importing Excel-type files?
## (4)Does the knowledge base support importing Excel-type files?
Yes. xlsx and similar formats can all be uploaded; support is not limited to CSV.
## How are knowledge base tokens counted?
## (5)How are knowledge base tokens counted?
Uniformly by the gpt-3.5 standard.
## After accidentally deleting the rerank model, how do I add it back to FastGPT?
## (6)After accidentally deleting the rerank model, how do I add it back to FastGPT?
![](/imgs/dataset3.png)
Once configured in config.json, the rerank model can be selected again.
## For apps and knowledge bases created on the hosted platform, will the data be cleared if the plan expires and is not renewed for a while?
## (7)For apps and knowledge bases created on the hosted platform, will the data be cleared if the plan expires and is not renewed for a while?
On the free plan, the knowledge base is cleared after thirty days without login; apps are left untouched. Paid plans automatically fall back to the free plan on expiry.
![](/imgs/dataset4.png)
## In knowledge-base queries with many relevant answers, the AI stops replying halfway through.
## (8)In knowledge-base queries with many relevant answers, the AI stops replying halfway through.
FastGPT reply-length formula:
max reply = min(configured max reply (built-in limit), max context (input plus output) - chat history)
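A runnable sketch of the formula above (the token counts are illustrative, not defaults):

```shell
# max reply = min(configured max reply, max context - chat history)
max_reply() {
  configured=$1; max_context=$2; history=$3
  remaining=$(( max_context - history ))
  if [ "$configured" -lt "$remaining" ]; then
    echo "$configured"
  else
    echo "$remaining"
  fi
}
max_reply 4000 128000 125000   # -> 3000: a long history squeezes the reply budget
```

Shrinking the history term is what restores the output budget, which is why reducing chat records helps.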
@@ -44,6 +40,7 @@ FastGPT reply-length formula:
So you can:
1. Check the configured max reply (the reply cap).
2. Shrink the input to grow the output, i.e. reduce the chat history ("chat records" in the workflow).
Configured max reply:
@@ -55,7 +52,7 @@ FastGPT reply-length formula:
Also, for self-hosted deployments, when configuring model parameters in the admin console you can leave headroom in the max context: for a 128000 model, configure only 120000 so the remaining space is reserved for output.
## Limited by model context, the chat sometimes cannot cover the configured history rounds; long multi-turn conversations report an insufficient-context error.
## (9)Limited by model context, the chat sometimes cannot cover the configured history rounds; long multi-turn conversations report an insufficient-context error.
FastGPT reply-length formula:
@@ -77,3 +74,6 @@ FastGPT reply-length formula:
![](/imgs/dataset2.png)
Also, for self-hosted deployments, when configuring model parameters in the admin console you can leave headroom in the max context: for a 128000 model, configure only 120000 so the remaining space is reserved for output.
## (10)The knowledge base page keeps flickering
The index model is not configured; add an index model to the configuration.
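Fixing this means adding the missing index model. A minimal `config.json` sketch follows; the field names match older FastGPT config examples and may differ between versions, so verify against your release's template:

```json
{
  "vectorModels": [
    {
      "model": "text-embedding-ada-002",
      "name": "Embedding",
      "maxToken": 3000
    }
  ]
}
```

Restart the container afterwards so the configuration is reloaded.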
+5 -1
@@ -6,6 +6,10 @@ title: Errors
It is a OneAPI channel problem; switch to a different model or a different relay provider.
1. ### Connection Error in the logs when using the API
2. ### Connection Error in the logs when using the API
Most likely an OpenAI api-key was configured while the deployment server is in mainland China and cannot reach overseas APIs; use a relay or reverse proxy to work around the blocked access.
3. ### Enabling image indexing returns 400
Configure the OCR model correctly under Admin → System Settings.
-4
@@ -2,10 +2,6 @@
title: Other Questions
---
## What's the OneAPI official website?
There's no official website, just the open-source README on GitHub: https://github.com/songquanpeng/one-api
## Is multi-user support available?
The community edition does not support multiple users. Multi-user support is only available in the commercial edition.
-4
@@ -2,10 +2,6 @@
title: Other Questions
---
## What's the OneAPI official website?
There is no official website, only the open-source README on GitHub: https://github.com/songquanpeng/one-api
## Multi-user support
The community edition does not support multiple users; only the commercial edition does.
-393
@@ -1,393 +0,0 @@
---
title: Self-Hosting FAQ
description: FastGPT self-hosting FAQ
---
## 1. Troubleshooting Methods
First search the [Issues](https://github.com/labring/FastGPT/issues) or open a new one. For self-hosting errors, always provide detailed steps, logs, and screenshots; otherwise the problem is very hard to diagnose.
### Getting Backend Errors
1. Run `docker ps -a` to check that all containers are running; if any is abnormal, try `docker logs <container_name>` to view its log.
2. If all containers run normally, use `docker logs <container_name>` to find the error log.
### Frontend Errors
When the frontend errors out, the page crashes and prompts you to check the console log. Open the browser console and read the logs under `console`; clicking a log's hyperlink jumps to the specific error file. Provide these details to make troubleshooting easier.
### OneAPI Errors
Errors carrying a `requestId` come from OneAPI, mostly because a model interface failed. See [OneAPI Common Errors](/docs/self-host/faq/#三常见的-oneapi-错误)
## 2. General Issues
### Frontend page crash
1. 90% of cases are an incorrect model configuration: make sure at least one model of each type is enabled, and check whether any `object` parameters (arrays and objects) in a model are abnormal; if one is empty, try an empty array or empty object.
2. A few cases are browser compatibility issues: the project uses some modern syntax that older browsers may not support. Please file an issue with the exact steps and the console error.
3. Turn off browser translation; an enabled translator can crash the page.
### When deployed via Sealos, do the local-deployment limits still apply?
![](/imgs/faq1.png)
This is the index model's length limit and is the same regardless of how you deploy, but different index models have different configurations, and the parameters can be changed in the admin console.
### How to mount the Mini Program verification file
Mount the verification file at the specified location: /app/projects/app/public/xxxx.txt
Then restart. For example:
![](/imgs/faq2.png)
### Database port 3306 is occupied and the service fails to start
![](/imgs/faq3.png)
Change the port mapping to something like 3307, for example 3307:3306.
### Local-deployment limits
See https://fael3z0zfze.feishu.cn/wiki/OFpAw8XzAi36Guk8dfucrCKUnjg for details.
### Can it run fully locally?
Yes. You need to prepare a vector model and an LLM model.
### Other models cannot do question classification / content extraction
1. Check the logs. Messages like JSON invalid or not support tool mean the model does not support tool calling or function calling; set `toolChoice=false` and `functionCall=false` to fall back to prompt mode. The built-in prompts are only tested against commercial model APIs: question classification is mostly usable, content extraction less so.
2. If the configuration is correct and there are no error logs, the prompt probably does not suit the model; customize it via `customCQPrompt`.
### Page crash
1. Turn off translation.
2. Check that the config file loads correctly; if not, system information is missing and some operations hit null pointers.
- 95% of cases are a bad config file, reported as xxx undefined.
- For `URI malformed`, please file an issue with the exact operation and page; it is a special-character encoding/parsing error.
3. Occasionally an incompatible API (rare).
### Responses are slower after enabling content completion
1. Question completion adds a round of AI generation.
2. It performs 3-5 queries; insufficient database performance has a visible impact.
### Replies work in the page but the API errors
The page uses stream=true mode, so set stream=true when testing the API as well. Some model interfaces (mostly domestic ones) handle non-stream requests poorly.
As with the previous question, test with curl.
### Knowledge base indexing makes no progress / is very slow
Check the error logs first. Possible cases:
1. Chat works but indexing makes no progress: no vector model configured (vectorModels).
2. Neither chat nor indexing works: the API call failed, possibly because OneAPI or OpenAI is unreachable.
3. Progress but very slow: the api key is throttled; OpenAI free accounts allow only 3 (or 60) requests per minute, capped at 200 per day.
### Connection error
Network problem. Servers in mainland China cannot reach OpenAI; check connectivity to the AI model yourself.
Or FastGPT cannot reach OneAPI (not on the same network).
### Changed vectorModels but it did not take effect
1. Restart the container so the model config is reloaded (the new model should appear in the logs or when creating a knowledge base).
2. Refresh the browser once.
3. Existing knowledge bases must be deleted and recreated; the vector model is bound at creation time and is not updated dynamically.
## 3. Common OneAPI Errors
Errors carrying a requestId come from OneAPI.
### insufficient_user_quota user quota is not enough
The OneAPI account balance is insufficient; the root user defaults to $200, which can be raised manually.
Path: open OneAPI -> Users -> edit next to the root user -> increase the remaining balance.
### Channel xxx not found
The model in the FastGPT model config must match a model configured in a OneAPI channel, otherwise this error appears. Check:
1. The model channel is missing or disabled in OneAPI.
2. The FastGPT config lists a model that OneAPI does not have. If OneAPI has no such model, leave it out of the config file.
3. A knowledge base was created with an old vector model that was later replaced. Delete and recreate the knowledge base.
If a model is not configured in OneAPI, do not configure it in `config.json` either, or errors are likely.
### Model test button fails
OneAPI only tests the first model of a channel, and only chat models; vector models cannot be auto-tested and must be tested with a manual request. [See example test commands](/docs/self-host/faq/#如何检查模型问题)
### get request url failed: Post `"https://xxx"` dial tcp: xxxx
OneAPI cannot reach the model over the network; check the network configuration.
### Incorrect API key provided: sk-xxxx.You can find your api Key at xxx
The OneAPI API key is misconfigured; fix the `OPENAI_API_KEY` environment variable and restart the container (docker-compose down, then docker-compose up -d).
You can `exec` into the container and run `env` to check that the variable took effect.
### bad_response_status_code bad response status code 503
1. The model service is unavailable.
2. Abnormal model interface parameters (temperature, max tokens, etc. may not fit).
3. ...
### Tiktoken download failed
OneAPI downloads a tiktoken dependency from the network at startup; if the network is unreachable, startup fails. See [OneAPI offline deployment](https://blog.csdn.net/wanh/article/details/139039216).
## 4. Common Model Issues
### How to check model availability
1. For privately deployed models, first confirm the deployment itself is healthy.
2. Test the upstream model directly with a curl request (test both cloud and private models).
3. Send a curl request through OneAPI to test the model.
4. Test the model inside FastGPT.
A few curl test examples:
<Tabs items={['LLM模型','Embedding模型','Rerank 模型','TTS 模型','Whisper 模型']}>
<Tab value="LLM模型">
```bash
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4o",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
```
</Tab>
<Tab value="Embedding模型">
```bash
curl https://api.openai.com/v1/embeddings \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"input": "The food was delicious and the waiter...",
"model": "text-embedding-ada-002",
"encoding_format": "float"
}'
```
</Tab>
<Tab value="Rerank 模型">
```bash
curl --location --request POST 'https://xxxx.com/api/v1/rerank' \
--header 'Authorization: Bearer {{ACCESS_TOKEN}}' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "bge-rerank-m3",
"query": "导演是谁",
"documents": [
"你是谁?\n我是电影《铃芽之旅》助手"
]
}'
```
</Tab>
<Tab value="TTS 模型">
```bash
curl https://api.openai.com/v1/audio/speech \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "tts-1",
"input": "The quick brown fox jumped over the lazy dog.",
"voice": "alloy"
}' \
--output speech.mp3
```
</Tab>
<Tab value="Whisper 模型">
```bash
curl https://api.openai.com/v1/audio/transcriptions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@/path/to/file/audio.mp3" \
-F model="whisper-1"
```
</Tab>
</Tabs>
### Error - empty model response / model error
This error occurs when, in stream mode, oneapi terminates the stream request without returning any content.
Version 4.8.10 added error logging: on error, the actual request Body is printed in the log; copy it and replay the request against oneapi with curl.
Because oneapi cannot capture errors correctly in stream mode, setting `stream=false` sometimes surfaces the exact error.
Possible causes:
1. A domestic model tripped content moderation.
2. Unsupported model parameters: keep only messages and the required parameters, and remove the rest while testing.
3. Parameters outside the model's accepted range: e.g. some models reject temperature 0 or two decimal places; max_tokens exceeded; context too long.
4. A broken model deployment incompatible with stream mode.
Test example below; you can paste the request body from the error log:
```bash
curl --location --request POST 'https://api.openai.com/v1/chat/completions' \
--header 'Authorization: Bearer sk-xxxx' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "xxx",
"temperature": 0.01,
"max_tokens": 1000,
"stream": true,
"messages": [
{
"role": "user",
"content": " 你是饿"
}
]
}'
```
### How to test whether a model supports tool calls
Both the model provider and oneapi must support tool calling. Test as follows:
##### 1. Send a first-round stream-mode tool test to `oneapi` via `curl`.
```bash
curl --location --request POST 'https://oneapi.xxx/v1/chat/completions' \
--header 'Authorization: Bearer sk-xxxx' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "gpt-5",
"temperature": 0.01,
"max_tokens": 8000,
"stream": true,
"messages": [
{
"role": "user",
"content": "几点了"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "hCVbIY",
"description": "获取用户当前时区的时间。",
"parameters": {
"type": "object",
"properties": {},
"required": []
}
}
}
],
"tool_choice": "auto"
}'
```
##### 2. Check the response
If the tool call works, the response contains the corresponding `tool_calls` field.
```json
{
  "id": "chatcmpl-A7kwo1rZ3OHYSeIFgfWYxu8X2koN3",
  "object": "chat.completion.chunk",
  "created": 1726412126,
  "model": "gpt-5",
  "system_fingerprint": "fp_483d39d857",
  "choices": [
    {
      "index": 0,
      "delta": {
        "tool_calls": [
          {
            "index": 0,
            "id": "call_0n24eiFk8OUyIyrdEbLdirU7",
            "type": "function",
            "function": {
              "name": "mEYIcFl84rYC",
              "arguments": ""
            }
          }
        ],
        "refusal": null
      },
      "logprobs": null,
      "finish_reason": null
    }
  ],
  "usage": null
}
```
##### 3. Send a second-round stream-mode tool test to `oneapi` via `curl`.
The second request sends the tool result back to the model; the response contains the model's final answer.
```bash
curl --location --request POST 'https://oneapi.xxxx/v1/chat/completions' \
--header 'Authorization: Bearer sk-xxx' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "gpt-5",
"temperature": 0.01,
"max_tokens": 8000,
"stream": true,
"messages": [
{
"role": "user",
"content": "几点了"
},
{
"role": "assistant",
"tool_calls": [
{
"id": "kDia9S19c4RO",
"type": "function",
"function": {
"name": "hCVbIY",
"arguments": "{}"
}
}
]
},
{
"tool_call_id": "kDia9S19c4RO",
"role": "tool",
"name": "hCVbIY",
"content": "{\n \"time\": \"2024-09-14 22:59:21 Sunday\"\n}"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "hCVbIY",
"description": "获取用户当前时区的时间。",
"parameters": {
"type": "object",
"properties": {},
"required": []
}
}
}
],
"tool_choice": "auto"
}'
```
### Vector search score greater than 1
The model's embeddings are not normalized. Only normalized models are currently supported.
+4 -2
@@ -13,8 +13,10 @@
"config/json",
"config/signoz",
"---Troubleshooting---",
"faq",
"troubleshooting/object-storage",
"troubleshooting/faq",
"troubleshooting/methods",
"troubleshooting/model-errors",
"troubleshooting/s3-issues",
"---Version Upgrades---",
"upgrading/upgrade-intruction",
"...upgrading",
+4 -1
@@ -13,7 +13,10 @@
"config/json",
"config/signoz",
"---Troubleshooting---",
"faq",
"troubleshooting/faq",
"troubleshooting/methods",
"troubleshooting/model-errors",
"troubleshooting/s3-issues",
"---Version Upgrades---",
"upgrading/upgrade-intruction",
"...upgrading",
@@ -0,0 +1,95 @@
---
title: General Troubleshooting
description: FastGPT Self-Hosting General Troubleshooting
---
### (1)Frontend Page Crash
1. 90% of cases are an incorrect model configuration: make sure at least one model of each type is enabled, and check whether any `object` parameters (arrays and objects) in a model are abnormal; if one is empty, try an empty array or empty object.
2. A few cases are browser compatibility issues: the project uses some modern syntax that older browsers may not support. Please file an issue with the exact steps and the console error.
3. Turn off browser translation; an enabled translator can crash the page.
---
### (2)If deployed via Sealos, do the local-deployment limits still apply?
![](/imgs/faq1.png)
This is the index model's length limit and is the same regardless of deployment method, but different index models have different configurations, and the parameters can be changed in the admin console.
---
### (3)How to mount the Mini Program configuration file
Mount the verification file to the specified location: /app/projects/app/public/xxxx.txt
Then restart. For example:
![](/imgs/faq2.png)
---
### (4)Database port 3306 is occupied and the service fails to start
![](/imgs/faq3.png)
Change the port mapping to something like 3307, for example 3307:3306.
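The remap is a one-line change on the host side of the compose port mapping; a sketch, with the service name and image as placeholders:

```yaml
services:
  mysql:
    image: mysql:8
    ports:
      - "3307:3306" # host port 3307 -> container port 3306
```

Only the host-side number changes; the container still listens on 3306 internally.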
---
### (5)Can it run purely locally?
Yes. You need to prepare the vector model and LLM model.
---
### (6)Other models cannot perform question classification / content extraction
1. Check the logs. Messages like JSON invalid or not support tool mean the model does not support tool calling or function calling; set `toolChoice=false` and `functionCall=false` to fall back to prompt mode. The built-in prompts are only tested against commercial model APIs: question classification is mostly usable, content extraction less so.
2. If the configuration is correct and there are no error logs, the prompt probably does not suit the model; customize it via `customCQPrompt`.
---
### (7)Page Crash
1. Turn off translation.
2. Check that the configuration file loads correctly; if not, system information is missing and some operations hit null pointers.
- 95% of cases are a bad configuration file, reported as xxx undefined.
- For `URI malformed`, please file an issue with the exact operation and page; it is a special-character encoding/parsing error.
3. Occasionally an incompatible API (rare).
---
### (8)After enabling content completion, responses become slower
1. Question completion adds a round of AI generation.
2. It performs 3-5 queries; insufficient database performance has a visible impact.
---
### (9)Replies work in the page, but the API errors
The page uses stream=true mode, so set stream=true when testing the API as well. Some model interfaces (mostly domestic ones) handle non-stream requests poorly.
As with the previous question, test with curl.
---
### (10)Knowledge base indexing has no progress / is very slow
Check the log errors first. Possible cases:
1. Chat works, but indexing makes no progress: the vector model (vectorModels) is not configured.
2. Neither chat nor indexing works: the API call failed, possibly because OneAPI or OpenAI is unreachable.
3. Progress but very slow: the api key is throttled; OpenAI free accounts allow only 3 (or 60) requests per minute, capped at 200 per day.
---
### (11)Connection error
Network problem. Servers in mainland China cannot reach OpenAI; check connectivity to the AI model.
Or FastGPT cannot reach OneAPI (not on the same network).
---
@@ -0,0 +1,102 @@
---
title: General Troubleshooting
description: Common FastGPT self-hosting troubleshooting
---
### (1)Frontend Page Crash
1. 90% of cases are an incorrect model configuration: make sure at least one model of each type is enabled, and check whether any `object` parameters (arrays and objects) in a model are abnormal; if one is empty, try an empty array or empty object.
2. A few cases are browser compatibility issues: the project uses some modern syntax that older browsers may not support. Please file an issue with the exact steps and the console error.
3. Turn off browser translation; an enabled translator can crash the page.
---
### (2)If deployed via Sealos, do the local-deployment limits still apply?
![](/imgs/faq1.png)
This is the index model's length limit and is the same regardless of deployment method, but different index models have different configurations, and the parameters can be changed in the admin console.
---
### (3)How to mount the Mini Program verification file
Mount the verification file at the specified location: /app/projects/app/public/xxxx.txt
Then restart. For example:
![](/imgs/faq2.png)
---
### (4)Database port 3306 is occupied and the service fails to start
![](/imgs/faq3.png)
Change the port mapping to something like 3307, for example 3307:3306.
---
### (5)Can it run fully locally?
Yes. You need to prepare a vector model and an LLM model.
---
### (6)Other models cannot perform question classification / content extraction
1. Check the logs. Messages like JSON invalid or not support tool mean the model does not support tool calling or function calling; set `toolChoice=false` and `functionCall=false` to fall back to prompt mode. The built-in prompts are only tested against commercial model APIs: question classification is mostly usable, content extraction less so.
2. If the configuration is correct and there are no error logs, the prompt probably does not suit the model; customize it via `customCQPrompt`.
---
### (7)Page Crash
1. Turn off translation.
2. Check that the configuration file loads correctly; if not, system information is missing and some operations hit null pointers.
- 95% of cases are a bad configuration file, reported as xxx undefined.
- For `URI malformed`, please file an issue with the exact operation and page; it is a special-character encoding/parsing error.
3. Occasionally an incompatible API (rare).
---
### (8)After enabling content completion, responses become slower
1. Question completion adds a round of AI generation.
2. It performs 3-5 queries; insufficient database performance has a visible impact.
---
### (9)Replies work in the page, but the API errors
The page uses stream=true mode, so set stream=true when testing the API as well. Some model interfaces (mostly domestic ones) handle non-stream requests poorly.
As with the previous question, test with curl.
---
### (10)Knowledge base indexing has no progress / is very slow
Check the log errors first. Possible cases:
1. Chat works, but indexing makes no progress: the vector model (vectorModels) is not configured.
2. Neither chat nor indexing works: the API call failed, possibly because OneAPI or OpenAI is unreachable.
3. Progress but very slow: the api key is throttled; OpenAI free accounts allow only 3 (or 60) requests per minute, capped at 200 per day.
---
### (11)Connection error
Network problem. Servers in mainland China cannot reach OpenAI; check connectivity to the AI model.
Or FastGPT cannot reach OneAPI (not on the same network).
---
### (12)Changed vectorModels but it did not take effect
1. Restart the container so the model config is reloaded (the new model should appear in the logs or when creating a knowledge base).
2. Refresh the browser once.
3. Existing knowledge bases must be deleted and recreated; the vector model is bound at creation time and is not updated dynamically.
---
@@ -0,0 +1,27 @@
---
title: Troubleshooting Methods
description: FastGPT Self-Hosting Common Troubleshooting Methods
---
## 1. Troubleshooting Methods
First search the [Issues](https://github.com/labring/FastGPT/issues) or open a new one. For self-hosting errors, always provide detailed steps, logs, and screenshots; otherwise the problem is hard to diagnose.
### (1) Get Backend Errors
1. Run `docker ps -a` to check that all containers are running; if any is abnormal, try `docker logs container_name` to view its log.
2. If all containers run normally, use `docker logs container_name` to find the error log.
---
### (2) Frontend Errors
When the frontend errors out, the page crashes and prompts you to check the console log. Open the browser console and read the logs under `console`; clicking a log's hyperlink jumps to the specific error file. Provide these details to make troubleshooting easier.
---
### (3) OneAPI Errors
Errors carrying a `requestId` come from OneAPI, mostly because a model interface failed. See [OneAPI Common Errors](/docs/self-host/faq/#三常见的-oneapi-错误)
---
@@ -0,0 +1,27 @@
---
title: Troubleshooting Methods
description: Common FastGPT self-hosting troubleshooting methods
---
## 1. Troubleshooting Methods
First search the [Issues](https://github.com/labring/FastGPT/issues) or open a new one. For self-hosting errors, always provide detailed steps, logs, and screenshots; otherwise the problem is hard to diagnose.
### (1)Get Backend Errors
1. Run `docker ps -a` to check that all containers are running; if any is abnormal, try `docker logs container_name` to view its log.
2. If all containers run normally, use `docker logs container_name` to find the error log.
---
### (2)Frontend Errors
When the frontend errors out, the page crashes and prompts you to check the console log. Open the browser console and read the logs under `console`; clicking a log's hyperlink jumps to the specific error file. Provide these details to make troubleshooting easier.
---
### (3)OneAPI Errors
Errors carrying a `requestId` come from OneAPI, mostly because a model interface failed. See [OneAPI Common Errors](/docs/self-host/faq/#三常见的-oneapi-错误)
---
@@ -0,0 +1,100 @@
---
title: Model Availability Troubleshooting
description: FastGPT Self-Hosting Model Availability Troubleshooting
---
### (1) How to check model availability issues
1. For privately deployed models, first confirm the deployment itself is healthy.
2. Test the upstream model directly with a curl request (test both cloud and private models).
3. Send a curl request through OneAPI to test the model.
4. Test the model inside FastGPT.
Here are a few curl test examples:
<Tabs items={['LLM Model','Embedding Model','Rerank Model','TTS Model','Whisper Model']}>
<Tab value="LLM Model">
```bash
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4o",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
```
</Tab>
<Tab value="Embedding Model">
```bash
curl https://api.openai.com/v1/embeddings \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"input": "The food was delicious and the waiter...",
"model": "text-embedding-ada-002",
"encoding_format": "float"
}'
```
</Tab>
<Tab value="Rerank Model">
```bash
curl --location --request POST 'https://xxxx.com/api/v1/rerank' \
--header 'Authorization: Bearer {{ACCESS_TOKEN}}' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "bge-rerank-m3",
"query": "Who is the director",
"documents": [
"Who are you?\nI am the assistant of the movie 'Suzume'"
]
}'
```
</Tab>
<Tab value="TTS Model">
```bash
curl https://api.openai.com/v1/audio/speech \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "tts-1",
"input": "The quick brown fox jumped over the lazy dog.",
"voice": "alloy"
}' \
--output speech.mp3
```
</Tab>
<Tab value="Whisper Model">
```bash
curl https://api.openai.com/v1/audio/transcriptions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@/path/to/file/audio.mp3" \
-F model="whisper-1"
```
</Tab>
</Tabs>
---
### (2) Error - Model response is empty / model error
This error occurs when, in stream mode, oneapi terminates the stream request without returning any content.
Version 4.8.10 added error logging: when an error occurs, the actual request Body is printed in the log; copy it and replay the request against oneapi with curl.
Because oneapi cannot capture errors correctly in stream mode, setting `stream=false` sometimes surfaces the exact error.
Possible causes:
1. A domestic model tripped content moderation.
2. Unsupported model parameters: keep only messages and the required parameters, and remove the rest while testing.
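The stream=false replay described above can be sketched like this; the body is a stand-in for the Body printed in the FastGPT error log, and the commented curl target is a placeholder:

```shell
# Flip the stream flag on the logged request body, then replay it.
body='{"model":"xxx","stream":true,"messages":[{"role":"user","content":"hi"}]}'
nostream=$(printf '%s' "$body" | sed 's/"stream":true/"stream":false/')
printf '%s\n' "$nostream"
# curl https://oneapi.example.com/v1/chat/completions \
#   -H 'Authorization: Bearer sk-xxx' \
#   -H 'Content-Type: application/json' \
#   -d "$nostream"
```

With streaming disabled, the upstream error arrives as a normal JSON body instead of a silently closed stream.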
@@ -0,0 +1,257 @@
---
title: Model Availability Troubleshooting
description: FastGPT self-hosting model availability troubleshooting
---
### (1)How to check model availability
1. For privately deployed models, first confirm the deployment itself is healthy.
2. Test the upstream model directly with a curl request (test both cloud and private models).
3. Send a curl request through OneAPI to test the model.
4. Test the model inside FastGPT.
A few curl test examples:
<Tabs items={['LLM模型','Embedding模型','Rerank 模型','TTS 模型','Whisper 模型']}>
<Tab value="LLM模型">
```bash
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4o",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
```
</Tab>
<Tab value="Embedding模型">
```bash
curl https://api.openai.com/v1/embeddings \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"input": "The food was delicious and the waiter...",
"model": "text-embedding-ada-002",
"encoding_format": "float"
}'
```
</Tab>
<Tab value="Rerank 模型">
```bash
curl --location --request POST 'https://xxxx.com/api/v1/rerank' \
--header 'Authorization: Bearer {{ACCESS_TOKEN}}' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "bge-rerank-m3",
"query": "导演是谁",
"documents": [
"你是谁?\n我是电影《铃芽之旅》助手"
]
}'
```
</Tab>
<Tab value="TTS 模型">
```bash
curl https://api.openai.com/v1/audio/speech \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "tts-1",
"input": "The quick brown fox jumped over the lazy dog.",
"voice": "alloy"
}' \
--output speech.mp3
```
</Tab>
<Tab value="Whisper 模型">
```bash
curl https://api.openai.com/v1/audio/transcriptions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@/path/to/file/audio.mp3" \
-F model="whisper-1"
```
</Tab>
</Tabs>
---
### (2)Error - empty model response / model error
This error occurs when, in stream mode, oneapi terminates the stream request without returning any content.
Version 4.8.10 added error logging: on error, the actual request Body is printed in the log; copy it and replay the request against oneapi with curl.
Because oneapi cannot capture errors correctly in stream mode, setting `stream=false` sometimes surfaces the exact error.
Possible causes:
1. A domestic model tripped content moderation.
2. Unsupported model parameters: keep only messages and the required parameters, and remove the rest while testing.
3. Parameters outside the model's accepted range: e.g. some models reject temperature 0 or two decimal places; max_tokens exceeded; context too long.
4. A broken model deployment incompatible with stream mode.
Test example below; you can paste the request body from the error log:
```bash
curl --location --request POST 'https://api.openai.com/v1/chat/completions' \
--header 'Authorization: Bearer sk-xxxx' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "xxx",
"temperature": 0.01,
"max_tokens": 1000,
"stream": true,
"messages": [
{
"role": "user",
"content": " 你是饿"
}
]
}'
```
---
### (3)How to test whether a model supports tool calls
Both the model provider and oneapi must support tool calling. Test as follows:
##### 1. Send a first-round stream-mode tool test to `oneapi` via `curl`.
```bash
curl --location --request POST 'https://oneapi.xxx/v1/chat/completions' \
--header 'Authorization: Bearer sk-xxxx' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "gpt-5",
"temperature": 0.01,
"max_tokens": 8000,
"stream": true,
"messages": [
{
"role": "user",
"content": "几点了"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "hCVbIY",
"description": "获取用户当前时区的时间。",
"parameters": {
"type": "object",
"properties": {},
"required": []
}
}
}
],
"tool_choice": "auto"
}'
```
##### 2. Check the response
If the tool call works, the response contains the corresponding `tool_calls` field.
```json
{
  "id": "chatcmpl-A7kwo1rZ3OHYSeIFgfWYxu8X2koN3",
  "object": "chat.completion.chunk",
  "created": 1726412126,
  "model": "gpt-5",
  "system_fingerprint": "fp_483d39d857",
  "choices": [
    {
      "index": 0,
      "delta": {
        "tool_calls": [
          {
            "index": 0,
            "id": "call_0n24eiFk8OUyIyrdEbLdirU7",
            "type": "function",
            "function": {
              "name": "mEYIcFl84rYC",
              "arguments": ""
            }
          }
        ],
        "refusal": null
      },
      "logprobs": null,
      "finish_reason": null
    }
  ],
  "usage": null
}
```
##### 3. Send a second-round stream-mode tool test to `oneapi` via `curl`.
The second request sends the tool result back to the model; the response contains the model's final answer.
```bash
curl --location --request POST 'https://oneapi.xxxx/v1/chat/completions' \
--header 'Authorization: Bearer sk-xxx' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "gpt-5",
"temperature": 0.01,
"max_tokens": 8000,
"stream": true,
"messages": [
{
"role": "user",
"content": "几点了"
},
{
"role": "assistant",
"tool_calls": [
{
"id": "kDia9S19c4RO",
"type": "function",
"function": {
"name": "hCVbIY",
"arguments": "{}"
}
}
]
},
{
"tool_call_id": "kDia9S19c4RO",
"role": "tool",
"name": "hCVbIY",
"content": "{\n \"time\": \"2024-09-14 22:59:21 Sunday\"\n}"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "hCVbIY",
"description": "获取用户当前时区的时间。",
"parameters": {
"type": "object",
"properties": {},
"required": []
}
}
}
],
"tool_choice": "auto"
}'
```
---
### (4)Vector search score greater than 1
The model's embeddings are not normalized. Only normalized models are currently supported.
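The score bound comes straight from normalization: for unit-length embeddings the retrieval score is an inner product that equals the cosine of the angle between query and document vectors, so it cannot exceed 1:

```math
\operatorname{score}(q,d)=\frac{\langle q,d\rangle}{\lVert q\rVert\,\lVert d\rVert}\in[-1,1],
\qquad \lVert q\rVert=\lVert d\rVert=1 \;\Rightarrow\; \operatorname{score}(q,d)=\langle q,d\rangle
```

If a model returns unnormalized vectors, the raw inner product is unbounded, which is exactly how scores above 1 arise.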
@@ -0,0 +1,36 @@
---
title: S3 Issues Troubleshooting
description: FastGPT Self-Hosting Common S3 Issues Troubleshooting
---
## 1. Log shows an ERR-level "Failed to ensure external public/private bucket exists" and object storage cannot be reached
### 1.1 Error Stack
- error: Error: getaddrinfo ENOTFOUND
Example
- ![](/imgs/faq4.png)
### Possible Cause
- STORAGE_S3_FORCE_PATH_STYLE configuration error
### Solution
- Set the STORAGE_S3_FORCE_PATH_STYLE option to `true`; otherwise the client cannot find the target service
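Assuming the environment variable name from the section above, the fix is a one-line setting in the service environment (sketch):

```shell
# Path-style addressing: buckets resolve as http://host:port/bucket instead of
# a bucket.host subdomain, whose DNS lookup fails with getaddrinfo ENOTFOUND.
STORAGE_S3_FORCE_PATH_STYLE=true
```

Set it in the `.env` file or the compose `environment:` block, then restart the service.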
---
## 2. Errors when uploading chat files / knowledge base files
Example
- ![](/imgs/faq5.png)
### 2.1 SignatureDoesNotMatch
- The signatures do not match, usually because of an Nginx misconfiguration
### Possible Cause
- Required request headers (such as Host) were not forwarded by Nginx
### Solution
- Configure proxy_set_header Host $http_host; do not set it to $host, because Nginx's built-in $host variable strips the port. Use $http_host.
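A minimal Nginx sketch of the fix; the upstream address is a placeholder:

```nginx
location / {
    proxy_pass http://fastgpt:3000;
    # $http_host keeps the port; the built-in $host strips it and breaks the S3 signature
    proxy_set_header Host $http_host;
}
```

Reload Nginx after the change so the header is forwarded on new uploads.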
---
@@ -0,0 +1,36 @@
---
title: S3 Troubleshooting
description: Common FastGPT self-hosting S3 troubleshooting
---
## 1. Log shows an ERR-level "Failed to ensure external public/private bucket exists" and object storage cannot be reached
### 1.1 Error Stack
- error: Error: getaddrinfo ENOTFOUND
Example
- ![](/imgs/faq4.png)
### Possible Cause
- STORAGE_S3_FORCE_PATH_STYLE is misconfigured
### Solution
- Set the STORAGE_S3_FORCE_PATH_STYLE option to `true`; otherwise the client cannot find the target service
---
## 2. Errors when uploading chat files / knowledge base files
Example
- ![](/imgs/faq5.png)
### 2.1 SignatureDoesNotMatch
- The signatures do not match, usually because of an Nginx misconfiguration
### Possible Cause
- Required request headers (such as Host) were not forwarded by Nginx
### Solution
- Configure proxy_set_header Host $http_host; do not set it to $host, because Nginx's built-in $host variable strips the port. Use $http_host.
---
+4
@@ -101,6 +101,10 @@ description: FastGPT Toc
- [/en/docs/self-host/index](/en/docs/self-host/index)
- [/en/docs/self-host/migration/docker_db](/en/docs/self-host/migration/docker_db)
- [/en/docs/self-host/migration/docker_mongo](/en/docs/self-host/migration/docker_mongo)
- [/en/docs/self-host/troubleshooting/faq](/en/docs/self-host/troubleshooting/faq)
- [/en/docs/self-host/troubleshooting/methods](/en/docs/self-host/troubleshooting/methods)
- [/en/docs/self-host/troubleshooting/model-errors](/en/docs/self-host/troubleshooting/model-errors)
- [/en/docs/self-host/troubleshooting/s3-issues](/en/docs/self-host/troubleshooting/s3-issues)
- [/en/docs/self-host/upgrading/4-12/4120](/en/docs/self-host/upgrading/4-12/4120)
- [/en/docs/self-host/upgrading/4-12/4121](/en/docs/self-host/upgrading/4-12/4121)
- [/en/docs/self-host/upgrading/4-12/4122](/en/docs/self-host/upgrading/4-12/4122)
+5 -1
@@ -4,6 +4,7 @@ description: FastGPT docs table of contents
---
- [/docs/faq/app](/docs/faq/app)
- [/docs/faq/attention](/docs/faq/attention)
- [/docs/faq/chat](/docs/faq/chat)
- [/docs/faq/dataset](/docs/faq/dataset)
- [/docs/faq/error](/docs/faq/error)
@@ -97,10 +98,13 @@ description: FastGPT docs table of contents
- [/docs/self-host/design/dataset](/docs/self-host/design/dataset)
- [/docs/self-host/design/design_plugin](/docs/self-host/design/design_plugin)
- [/docs/self-host/dev](/docs/self-host/dev)
- [/docs/self-host/faq](/docs/self-host/faq)
- [/docs/self-host/index](/docs/self-host/index)
- [/docs/self-host/migration/docker_db](/docs/self-host/migration/docker_db)
- [/docs/self-host/migration/docker_mongo](/docs/self-host/migration/docker_mongo)
- [/docs/self-host/troubleshooting/faq](/docs/self-host/troubleshooting/faq)
- [/docs/self-host/troubleshooting/methods](/docs/self-host/troubleshooting/methods)
- [/docs/self-host/troubleshooting/model-errors](/docs/self-host/troubleshooting/model-errors)
- [/docs/self-host/troubleshooting/s3-issues](/docs/self-host/troubleshooting/s3-issues)
- [/docs/self-host/upgrading/4-12/4120](/docs/self-host/upgrading/4-12/4120)
- [/docs/self-host/upgrading/4-12/4121](/docs/self-host/upgrading/4-12/4121)
- [/docs/self-host/upgrading/4-12/4122](/docs/self-host/upgrading/4-12/4122)
+14 -6
@@ -1,12 +1,13 @@
{
"document/content/docs/faq/app.en.mdx": "2026-02-26T22:14:30+08:00",
"document/content/docs/faq/app.mdx": "2025-08-02T19:38:37+08:00",
"document/content/docs/faq/attention.mdx": "2026-02-26T11:41:53+08:00",
"document/content/docs/faq/chat.en.mdx": "2026-02-26T22:14:30+08:00",
"document/content/docs/faq/chat.mdx": "2025-08-02T19:38:37+08:00",
"document/content/docs/faq/dataset.en.mdx": "2026-02-26T22:14:30+08:00",
"document/content/docs/faq/dataset.mdx": "2025-08-02T19:38:37+08:00",
"document/content/docs/faq/dataset.mdx": "2026-02-26T11:41:53+08:00",
"document/content/docs/faq/error.en.mdx": "2026-02-26T22:14:30+08:00",
"document/content/docs/faq/error.mdx": "2025-12-10T20:07:05+08:00",
"document/content/docs/faq/error.mdx": "2026-02-26T11:41:53+08:00",
"document/content/docs/faq/external_channel_integration.en.mdx": "2026-02-26T22:14:30+08:00",
"document/content/docs/faq/external_channel_integration.mdx": "2025-08-02T19:38:37+08:00",
"document/content/docs/faq/index.en.mdx": "2026-02-26T22:14:30+08:00",
@@ -190,13 +191,20 @@
"document/content/docs/self-host/dev.en.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/dev.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/faq.en.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/faq.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/index.en.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/index.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/migration/docker_db.en.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/migration/docker_db.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/migration/docker_mongo.en.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/migration/docker_mongo.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/troubleshooting/faq.en.mdx": "2026-03-12T15:26:02+08:00",
"document/content/docs/self-host/troubleshooting/faq.mdx": "2026-03-12T15:26:02+08:00",
"document/content/docs/self-host/troubleshooting/methods.en.mdx": "2026-03-12T15:26:02+08:00",
"document/content/docs/self-host/troubleshooting/methods.mdx": "2026-03-12T15:26:02+08:00",
"document/content/docs/self-host/troubleshooting/model-errors.en.mdx": "2026-03-12T15:26:02+08:00",
"document/content/docs/self-host/troubleshooting/model-errors.mdx": "2026-03-12T15:26:02+08:00",
"document/content/docs/self-host/troubleshooting/s3-issues.en.mdx": "2026-03-12T15:26:02+08:00",
"document/content/docs/self-host/troubleshooting/s3-issues.mdx": "2026-03-12T15:26:02+08:00",
"document/content/docs/self-host/upgrading/4-12/4120.en.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/upgrading/4-12/4120.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/upgrading/4-12/4121.en.mdx": "2026-03-03T17:39:47+08:00",
@@ -235,7 +243,7 @@
"document/content/docs/self-host/upgrading/4-14/4148.mdx": "2026-03-09T17:39:53+08:00",
"document/content/docs/self-host/upgrading/4-14/41481.en.mdx": "2026-03-09T12:02:02+08:00",
"document/content/docs/self-host/upgrading/4-14/41481.mdx": "2026-03-09T17:39:53+08:00",
"document/content/docs/self-host/upgrading/4-14/4149.mdx": "2026-03-13T18:08:05+08:00",
"document/content/docs/self-host/upgrading/4-14/4149.mdx": "2026-03-12T20:51:00+08:00",
"document/content/docs/self-host/upgrading/outdated/40.en.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/upgrading/outdated/40.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/upgrading/outdated/41.en.mdx": "2026-03-03T17:39:47+08:00",
@@ -376,8 +384,8 @@
"document/content/docs/self-host/upgrading/outdated/499.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/upgrading/upgrade-intruction.en.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/upgrading/upgrade-intruction.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/toc.en.mdx": "2026-03-09T12:02:02+08:00",
"document/content/docs/toc.mdx": "2026-03-11T23:15:17+08:00",
"document/content/docs/toc.en.mdx": "2026-03-12T15:48:21+08:00",
"document/content/docs/toc.mdx": "2026-03-12T15:48:21+08:00",
"document/content/docs/use-cases/app-cases/dalle3.en.mdx": "2026-02-26T22:14:30+08:00",
"document/content/docs/use-cases/app-cases/dalle3.mdx": "2025-07-23T21:35:03+08:00",
"document/content/docs/use-cases/app-cases/english_essay_correction_bot.en.mdx": "2026-02-26T22:14:30+08:00",
Binary file not shown (image added, 72 KiB).
Binary file not shown (image added, 511 KiB).