---
title: Model Troubleshooting
description: FastGPT Self-Hosting Model Troubleshooting
---
### (1) How to check model availability

1. For privately deployed models, first confirm that the deployed model itself is running normally.
2. Send a curl request directly to the upstream model to verify it responds (this applies to both cloud and privately deployed models).
3. Send a curl request through OneAPI to verify the model works via the relay.
4. Test the model inside FastGPT.

Here are a few example curl requests:
<Tabs items={['LLM Model','Embedding Model','Rerank Model','TTS Model','Whisper Model']}>
<Tab value="LLM Model">

```bash
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'
```

</Tab>
<Tab value="Embedding Model">

```bash
curl https://api.openai.com/v1/embeddings \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "The food was delicious and the waiter...",
    "model": "text-embedding-ada-002",
    "encoding_format": "float"
  }'
```

</Tab>
<Tab value="Rerank Model">

```bash
curl --location --request POST 'https://xxxx.com/api/v1/rerank' \
  --header 'Authorization: Bearer {{ACCESS_TOKEN}}' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "model": "bge-rerank-m3",
    "query": "Who is the director",
    "documents": [
      "Who are you?\nI am the assistant of the movie '\''Suzume'\''"
    ]
  }'
```

</Tab>
<Tab value="TTS Model">

```bash
curl https://api.openai.com/v1/audio/speech \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tts-1",
    "input": "The quick brown fox jumped over the lazy dog.",
    "voice": "alloy"
  }' \
  --output speech.mp3
```

</Tab>
<Tab value="Whisper Model">

```bash
curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F file="@/path/to/file/audio.mp3" \
  -F model="whisper-1"
```

</Tab>
</Tabs>
---
### (2) Error - "Model response is empty" / "Model error"

This error occurs when, in stream mode, OneAPI ends the stream request without returning any content.

Version 4.8.10 added error logging: when an error occurs, the actual request body that was sent is printed in the log. You can copy those parameters and replay the request against OneAPI with curl.

Since OneAPI cannot correctly capture errors in stream mode, setting `stream=false` will sometimes surface the exact error.

Possible causes:

1. A domestic (Chinese) model triggered content risk control.
2. Unsupported model parameters: keep only `messages` and the required parameters, and remove the rest when testing.
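To replay a logged request with streaming disabled, the body copied from the error log can be sent directly to OneAPI. A minimal sketch, assuming a OneAPI deployment reachable at `$ONEAPI_BASE_URL` with key `$ONEAPI_API_KEY` (both placeholders for your own deployment):

```bash
# Replay the request body from the FastGPT error log with streaming disabled,
# so OneAPI returns the underlying error instead of an empty stream.
curl "$ONEAPI_BASE_URL/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ONEAPI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "stream": false,
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```

If the non-streaming response contains an error object, that is usually the real cause that the empty stream was hiding.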
---
### (3) "Current group upstream load is saturated, please try again later"

If this error (e.g. accompanied by a `request id:xxx`) appears in the logs or the response, it is typically a OneAPI channel issue. Try switching to a different model or a different relay provider.
---
### (4) "Connection Error" in logs when using the API

Most likely the API endpoint points to OpenAI, while the server is deployed in mainland China and cannot reach overseas endpoints. Use a relay service or a reverse proxy to restore connectivity.
---
### (5) Enabling image indexing returns a 400 error

Configure the OCR model correctly under `Admin` -> `System Configuration`.