mirror of
https://github.com/labring/FastGPT.git
synced 2026-04-25 02:01:53 +08:00
V4.14.10 dev (#6674)
* feat: model config with brand-new price calculation mechanism (#6616)
  * fix: image read and JSON parsing error (Agent) (#6502)
  * dataset cite and pause; perf: plan-call second parse; add test
  * adjust calculation method and priceTiers resolution process
  * perf: price config code; fix: default price, comments, and tests
* wip: fix modal UI (#6634)
  * fix: maxInputToken set
  * chore: add price unit for non-LLM models
* chore: replace question mark icon with beta tag (#6672)
* feat: rerank too long; fix: rerank UI (Agent), embedding returns 0 (#6663)
  * fix: rerank function; perf: rerank code; perf: model price UI
* remove llmType field; revert model init
* fix: field; fix: model select filter
* perf: multiple selector render
* remove invalid checker; remove invalid i18n
* perf: model selector tip
* fix cr; limit pnpm version
* fix: i18n; fix action
* set default minToken; update i18n
* perf: usage push
* fix: rerank model UI (#6677)
* fix: tier match error; fix: test

Co-authored-by: archer <545436317@qq.com>
Co-authored-by: YeYuheng <57035043+YYH211@users.noreply.github.com>
Co-authored-by: Ryo <whoeverimf5@gmail.com>
@@ -152,10 +152,6 @@ If you find it tedious to configure models through the UI, you can use a configu
"charsPointsPrice": 0, // Credits per 1k tokens (commercial edition)
"censor": false, // Enable content moderation (commercial edition)
"vision": true, // Supports image input
"datasetProcess": true, // Used as a text comprehension model (QA). At least one model must have this set to true, or Knowledge Base will error
"usedInClassify": true, // Used for question classification (at least one must be true)
"usedInExtractFields": true, // Used for content extraction (at least one must be true)
"usedInToolCall": true, // Used for tool calls (at least one must be true)
"toolChoice": true, // Supports tool selection (used in classification, extraction, and tool calls)
"functionCall": false, // Supports function calling (used in classification, extraction, and tool calls). toolChoice takes priority; if false, functionCall is used; if also false, prompt mode is used
"customCQPrompt": "", // Custom text classification prompt (for models without tool/function call support)
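Assembled from the fields above, a complete `llmModels` entry in `config.json` might look like the sketch below. The model name, context window, and response limit are illustrative placeholders, not values taken from this commit:

```json
{
  "model": "gpt-4o-mini",   // provider model identifier (placeholder)
  "name": "gpt-4o-mini",    // display name in the FastGPT UI (placeholder)
  "maxContext": 128000,     // illustrative context window
  "maxResponse": 4000,      // illustrative max output tokens
  "charsPointsPrice": 0,
  "censor": false,
  "vision": true,
  "datasetProcess": true,
  "usedInClassify": true,
  "usedInExtractFields": true,
  "usedInToolCall": true,
  "toolChoice": true,
  "functionCall": false,
  "customCQPrompt": ""
}
```

Note that the `//` comments follow the convention used in the FastGPT config docs; strict JSON parsers do not accept them, so strip comments before use if your tooling requires plain JSON.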
@@ -152,10 +152,6 @@ The FastGPT UI provides a simple test for each model category, which gives a preliminary check that the model
"charsPointsPrice": 0, // Credits per 1k tokens (commercial edition)
"censor": false, // Enable content moderation (commercial edition)
"vision": true, // Supports image input
"datasetProcess": true, // Used as a text comprehension model (QA). At least one model must have this set to true, or Knowledge Base will error
"usedInClassify": true, // Used for question classification (at least one must be true)
"usedInExtractFields": true, // Used for content extraction (at least one must be true)
"usedInToolCall": true, // Used for tool calls (at least one must be true)
"toolChoice": true, // Supports tool selection (used in classification, extraction, and tool calls)
"functionCall": false, // Supports function calling (used in classification, extraction, and tool calls). toolChoice takes priority; if false, functionCall is used; if also false, prompt mode is used
"customCQPrompt": "", // Custom text classification prompt (for models without tool/function call support)
@@ -136,10 +136,6 @@ Add the qwen-chat model to the `llmModels` section of FastGPT's `config.json`:
"charsPointsPrice": 0, // n points/1k tokens (Commercial Edition)
"censor": false, // Enable content moderation (Commercial Edition)
"vision": true, // Supports image input
"datasetProcess": true, // Use as Knowledge Base processing model (QA). At least one model must be true, or Knowledge Base will error
"usedInClassify": true, // Use for question classification (at least one must be true)
"usedInExtractFields": true, // Use for content extraction (at least one must be true)
"usedInToolCall": true, // Use for tool calling (at least one must be true)
"toolChoice": true, // Supports tool choice (used in classification, extraction, tool calling)
"functionCall": false, // Supports function calling (used in classification, extraction, tool calling. toolChoice takes priority; if false, falls back to functionCall; if still false, uses prompt mode)
"customCQPrompt": "", // Custom classification prompt (for models without tool/function calling support)
@@ -136,10 +136,6 @@ curl --location --request POST 'https://[oneapi_url]/v1/chat/completions' \
"charsPointsPrice": 0, // Credits per 1k tokens (commercial edition)
"censor": false, // Enable content moderation (commercial edition)
"vision": true, // Supports image input
"datasetProcess": true, // Used as a Knowledge Base processing model (QA). At least one model must be true, or Knowledge Base will error
"usedInClassify": true, // Used for question classification (at least one must be true)
"usedInExtractFields": true, // Used for content extraction (at least one must be true)
"usedInToolCall": true, // Used for tool calls (at least one must be true)
"toolChoice": true, // Supports tool selection (used in classification, extraction, and tool calls)
"functionCall": false, // Supports function calling (used in classification, extraction, and tool calls). toolChoice takes priority; if false, functionCall is used; if also false, prompt mode is used
"customCQPrompt": "", // Custom text classification prompt (for models without tool/function call support)
@@ -13,12 +13,17 @@ description: 'FastGPT V4.14.10 Release Notes'
1. Added an OpenSandbox Docker deployment option with adaptation, supporting data persistence via a mounted volume.
2. The Feishu publish channel now supports streaming output.
3. The maximum directory limit is now configurable via an environment variable.
4. Added a rerank model limit configuration, avoiding rerank failures when a single document exceeds the per-document limit.
5. Added a tiered LLM metering/billing mode, and unified the billing push mechanism.

## ⚙️ Optimizations

1. Workflow runtime: reduced computational complexity.
2. Added limits on computations over large variables, avoiding thread blocking caused by excessive computational complexity.
3. Removed per-model options such as "used for Knowledge Base file processing" and "used for question classification", replacing them with a unified "test model" flag. Test models carry a special badge and can only be used in AI chat; other scenarios filter them out.

## 🐛 Fixes

1. Default values of global variables in sub-workflows did not take effect.
2. A configured rerank model was not displayed in agent mode.
3. The bge-m3 embedding model returned all-zero vectors.
@@ -94,7 +94,6 @@ Copy the configuration below, click the import button in the top-right corner of
],
"label": "core.module.input.label.aiModel",
"valueType": "string",
"llmModelType": "all",
"value": "gpt-3.5-turbo"
},
{
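For context, the aiModel fields shown in these workflow diffs sit inside a node's inputs array in the exported workflow JSON. A minimal surrounding sketch looks like the following; the `key` and `type` values are assumptions for illustration, not taken from this commit:

```json
{
  "key": "model",               // assumed input key for the model selector
  "type": "selectLLMModel",     // assumed render type of the selector widget
  "label": "core.module.input.label.aiModel",
  "valueType": "string",
  "llmModelType": "all",        // which model list to offer ("all", "extractFields", ...)
  "value": "gpt-3.5-turbo"      // the concrete model chosen for this node
}
```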
@@ -94,7 +94,6 @@ description: Send a Feishu webhook notification using the tool-call module
],
"label": "core.module.input.label.aiModel",
"valueType": "string",
"llmModelType": "all",
"value": "gpt-3.5-turbo"
},
{
@@ -208,7 +208,6 @@ Copy the configuration below, enter「Advanced Workflow」, select「Import Conf
],
"label": "core.module.input.label.aiModel",
"valueType": "string",
"llmModelType": "all",
"value": "FastAI-plus"
},
{
@@ -208,7 +208,6 @@ export default async function (ctx: FunctionContext) {
],
"label": "core.module.input.label.aiModel",
"valueType": "string",
"llmModelType": "all",
"value": "FastAI-plus"
},
{
@@ -1150,7 +1149,6 @@ export default async function (ctx: FunctionContext) {
"label": "core.module.input.label.aiModel",
"required": true,
"valueType": "string",
"llmModelType": "extractFields",
"value": "gpt-3.5-turbo"
},
{
@@ -453,7 +453,6 @@ Copy and import directly into FastGPT.
],
"label": "core.module.input.label.aiModel",
"valueType": "string",
"llmModelType": "all",
"value": "gpt-3.5-turbo"
},
{
@@ -453,7 +453,6 @@ In the HTTP module, 3 tool parameters need to be configured:
],
"label": "core.module.input.label.aiModel",
"valueType": "string",
"llmModelType": "all",
"value": "gpt-3.5-turbo"
},
{
@@ -147,8 +147,8 @@
"document/content/docs/openapi/share.mdx": "2026-02-12T18:45:30+08:00",
"document/content/docs/self-host/config/json.en.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/config/json.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/config/model/intro.en.mdx": "2026-03-19T14:09:03+08:00",
"document/content/docs/self-host/config/model/intro.mdx": "2026-03-19T14:09:03+08:00",
"document/content/docs/self-host/config/model/intro.en.mdx": "2026-03-24T23:37:00+08:00",
"document/content/docs/self-host/config/model/intro.mdx": "2026-03-24T23:37:00+08:00",
"document/content/docs/self-host/config/model/minimax.en.mdx": "2026-03-19T09:32:57-05:00",
"document/content/docs/self-host/config/model/minimax.mdx": "2026-03-19T09:32:57-05:00",
"document/content/docs/self-host/config/model/siliconCloud.en.mdx": "2026-03-19T14:09:03+08:00",
@@ -171,8 +171,8 @@
"document/content/docs/self-host/custom-models/mineru.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/custom-models/ollama.en.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/custom-models/ollama.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/custom-models/xinference.en.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/custom-models/xinference.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/custom-models/xinference.en.mdx": "2026-03-24T23:37:00+08:00",
"document/content/docs/self-host/custom-models/xinference.mdx": "2026-03-24T23:37:00+08:00",
"document/content/docs/self-host/deploy/docker.en.mdx": "2026-03-19T14:09:03+08:00",
"document/content/docs/self-host/deploy/docker.mdx": "2026-03-19T14:09:03+08:00",
"document/content/docs/self-host/deploy/sealos.en.mdx": "2026-03-03T17:39:47+08:00",
@@ -220,7 +220,7 @@
"document/content/docs/self-host/upgrading/4-14/4140.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/upgrading/4-14/4141.en.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/upgrading/4-14/4141.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/upgrading/4-14/41410.mdx": "2026-03-27T12:01:02+08:00",
"document/content/docs/self-host/upgrading/4-14/41410.mdx": "2026-03-28T17:10:23+08:00",
"document/content/docs/self-host/upgrading/4-14/4142.en.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/upgrading/4-14/4142.mdx": "2026-03-03T17:39:47+08:00",
"document/content/docs/self-host/upgrading/4-14/4143.en.mdx": "2026-03-03T17:39:47+08:00",
@@ -387,14 +387,14 @@
"document/content/docs/use-cases/app-cases/dalle3.mdx": "2025-07-23T21:35:03+08:00",
"document/content/docs/use-cases/app-cases/english_essay_correction_bot.en.mdx": "2026-02-26T22:14:30+08:00",
"document/content/docs/use-cases/app-cases/english_essay_correction_bot.mdx": "2025-07-23T21:35:03+08:00",
"document/content/docs/use-cases/app-cases/feishu_webhook.en.mdx": "2026-02-26T22:14:30+08:00",
"document/content/docs/use-cases/app-cases/feishu_webhook.mdx": "2025-07-23T21:35:03+08:00",
"document/content/docs/use-cases/app-cases/feishu_webhook.en.mdx": "2026-03-28T17:14:28+08:00",
"document/content/docs/use-cases/app-cases/feishu_webhook.mdx": "2026-03-28T17:14:28+08:00",
"document/content/docs/use-cases/app-cases/fixingEvidence.en.mdx": "2026-02-26T22:14:30+08:00",
"document/content/docs/use-cases/app-cases/fixingEvidence.mdx": "2025-07-23T21:35:03+08:00",
"document/content/docs/use-cases/app-cases/google_search.en.mdx": "2026-02-26T22:14:30+08:00",
"document/content/docs/use-cases/app-cases/google_search.mdx": "2025-07-23T21:35:03+08:00",
"document/content/docs/use-cases/app-cases/lab_appointment.en.mdx": "2026-02-26T22:14:30+08:00",
"document/content/docs/use-cases/app-cases/lab_appointment.mdx": "2025-12-10T20:07:05+08:00",
"document/content/docs/use-cases/app-cases/google_search.en.mdx": "2026-03-28T17:14:28+08:00",
"document/content/docs/use-cases/app-cases/google_search.mdx": "2026-03-28T17:14:28+08:00",
"document/content/docs/use-cases/app-cases/lab_appointment.en.mdx": "2026-03-28T17:10:23+08:00",
"document/content/docs/use-cases/app-cases/lab_appointment.mdx": "2026-03-28T17:10:23+08:00",
"document/content/docs/use-cases/app-cases/multi_turn_translation_bot.en.mdx": "2026-02-26T22:14:30+08:00",
"document/content/docs/use-cases/app-cases/multi_turn_translation_bot.mdx": "2025-07-23T21:35:03+08:00",
"document/content/docs/use-cases/app-cases/submit_application_template.en.mdx": "2026-03-03T17:39:47+08:00",