---
title: Model Troubleshooting
description: FastGPT Self-Hosting Model Troubleshooting
---

### (1) How to check model availability issues

1. For privately deployed models, first confirm that the deployed model itself is running normally.
2. Send a CURL request directly to the upstream model (cloud or privately deployed) to verify that it responds correctly.
3. Send a CURL request to OneAPI to verify that the model works through the relay.
4. Test the model inside FastGPT.
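
For step 3, a minimal sketch of testing through OneAPI, assuming it is exposed on port 3000 of the same host and `sk-xxx` is a token created in the OneAPI console (the address, token, and model name are placeholders — adjust them to your deployment):

```bash
# Replay the same chat request against OneAPI instead of the upstream.
# If this fails while the direct upstream test (step 2) succeeds, the
# channel configuration in OneAPI is the likely problem.
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxx" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```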
Here are a few test CURL examples:

<Tabs items={['LLM Model','Embedding Model','Rerank Model','TTS Model','Whisper Model']}>
<Tab value="LLM Model">

```bash
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
```

</Tab>
<Tab value="Embedding Model">

```bash
curl https://api.openai.com/v1/embeddings \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "The food was delicious and the waiter...",
    "model": "text-embedding-ada-002",
    "encoding_format": "float"
  }'
```

</Tab>
<Tab value="Rerank Model">

```bash
curl --location --request POST 'https://xxxx.com/api/v1/rerank' \
  --header 'Authorization: Bearer {{ACCESS_TOKEN}}' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "model": "bge-rerank-m3",
    "query": "Who is the director",
    "documents": [
      "Who are you?\nI am the assistant of the movie \"Suzume\""
    ]
  }'
```

</Tab>
<Tab value="TTS Model">

```bash
curl https://api.openai.com/v1/audio/speech \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tts-1",
    "input": "The quick brown fox jumped over the lazy dog.",
    "voice": "alloy"
  }' \
  --output speech.mp3
```

</Tab>
<Tab value="Whisper Model">

```bash
curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F file="@/path/to/file/audio.mp3" \
  -F model="whisper-1"
```

</Tab>
</Tabs>

---

### (2) Error - Model response is empty / Model error

This error occurs because, in stream mode, OneAPI ends the stream request directly without returning any content.

Version 4.8.10 added error logging: when an error occurs, the actual request body that was sent is printed in the log. You can copy those parameters and replay the request against OneAPI with curl.

Since OneAPI cannot correctly capture errors in stream mode, setting `stream=false` can sometimes surface the exact error.
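
For example, a minimal sketch of replaying the logged body with streaming disabled (assuming an OpenAI-compatible OneAPI endpoint at `http://localhost:3000` and a valid `sk-xxx` token — both placeholders):

```bash
# With "stream": false, OneAPI returns the full error body in one
# response instead of silently closing the stream.
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxx" \
  -d '{
    "model": "gpt-4o",
    "stream": false,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```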

Possible causes:

1. A domestic (mainland China) model triggered its provider's content moderation (risk control).
2. Unsupported model parameters: keep only `messages` and the parameters you actually need, and remove the rest for testing.

---

### (3) "Current group upstream load is saturated, please try again later"

If you see this error (often accompanied by a `request id:xxx`) in the logs or response, it is typically a OneAPI channel issue. Try switching to a different model or a different relay provider.

---

### (4) "Connection Error" in logs when using the API

Most likely the API key points to OpenAI's endpoint, but the server is deployed in mainland China and cannot reach overseas endpoints. Use a relay service or reverse proxy to resolve the connectivity issue.
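
As a sketch, for a docker-compose deployment this usually means pointing the base URL at the relay instead of `api.openai.com` (the variable names below appear in common FastGPT releases, and the relay address is a placeholder — check your own `docker-compose.yml`):

```bash
# Environment variables for the fastgpt service: route model calls
# through the relay's OpenAI-compatible endpoint.
OPENAI_BASE_URL=https://your-relay.example.com/v1   # hypothetical relay address
CHAT_API_KEY=sk-xxx                                 # key issued by the relay
```

Restart the container after changing the environment so the new endpoint takes effect.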
---

### (5) Enabling image indexing returns a 400 error

You need to configure the OCR model correctly in `Admin` -> `System Configuration`.