flow modules (#161)

* flow intro

* docs:flow modules

* docs:flow modules
Archer
2023-08-09 18:07:58 +08:00
committed by GitHub
parent b6f9f77ed4
commit 657d0ad374
52 changed files with 485 additions and 56 deletions

View File

@@ -5,8 +5,8 @@
"show_appStore": false,
"show_userDetail": false,
"show_git": true,
"systemTitle": "FastAI",
"authorText": "Made by FastAI Team.",
"systemTitle": "FastGPT",
"authorText": "Made by FastGPT Team.",
"gitLoginKey": ""
},
"SystemParams": {
@@ -19,45 +19,45 @@
"ChatModels": [
{
"model": "gpt-3.5-turbo",
"name": "FastAI-4k",
"name": "GPT35-4k",
"contextMaxToken": 4000,
"quoteMaxToken": 2000,
"maxTemperature": 1.2,
"price": 1.5,
"price": 0,
"defaultSystem": ""
},
{
"model": "gpt-3.5-turbo-16k",
"name": "FastAI-16k",
"name": "GPT35-16k",
"contextMaxToken": 16000,
"quoteMaxToken": 8000,
"maxTemperature": 1.2,
"price": 3,
"price": 0,
"defaultSystem": ""
},
{
"model": "gpt-4",
"name": "FastAI-Plus",
"name": "GPT4-8k",
"contextMaxToken": 8000,
"quoteMaxToken": 4000,
"maxTemperature": 1.2,
"price": 45,
"price": 0,
"defaultSystem": ""
}
],
"QAModels": [
{
"model": "gpt-3.5-turbo-16k",
"name": "FastAI-16k",
"name": "GPT35-16k",
"maxToken": 16000,
"price": 3
"price": 0
}
],
"VectorModels": [
{
"model": "text-embedding-ada-002",
"name": "Embedding-2",
"price": 0.2
"price": 0
}
]
}

View File

@@ -40,48 +40,48 @@ const defaultFeConfigs = {
show_appStore: false,
show_userDetail: false,
show_git: true,
systemTitle: 'FastAI',
authorText: 'Made by FastAI Team.'
systemTitle: 'FastGPT',
authorText: 'Made by FastGPT Team.'
};
const defaultChatModels = [
{
model: 'gpt-3.5-turbo',
name: 'FastAI-4k',
name: 'GPT35-4k',
contextMaxToken: 4000,
quoteMaxToken: 2400,
maxTemperature: 1.2,
price: 1.5
price: 0
},
{
model: 'gpt-3.5-turbo-16k',
name: 'FastAI-16k',
name: 'GPT35-16k',
contextMaxToken: 16000,
quoteMaxToken: 8000,
maxTemperature: 1.2,
price: 3
price: 0
},
{
model: 'gpt-4',
name: 'FastAI-Plus',
name: 'GPT4-8k',
contextMaxToken: 8000,
quoteMaxToken: 4000,
maxTemperature: 1.2,
price: 45
price: 0
}
];
const defaultQAModels = [
{
model: 'gpt-3.5-turbo-16k',
name: 'FastAI-16k',
name: 'GPT35-16k',
maxToken: 16000,
price: 3
price: 0
}
];
const defaultVectorModels = [
{
model: 'text-embedding-ada-002',
name: 'Embedding-2',
price: 0.2
price: 0
}
];

View File

@@ -2,7 +2,7 @@
sidebar_position: 2
---
# Other Chat Model Configuration
# Config Chat Model
By default, FastGPT is configured with only 3 GPT models. If you need to integrate other models, some additional configuration is required.
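For orientation, a hypothetical entry in the `ChatModels` array of `data/config.json` might look like the following. The `model` and `name` values are placeholders; the field set is taken from the default config shown earlier in this commit:

```json
{
  "model": "your-model-id",
  "name": "MyModel-8k",
  "contextMaxToken": 8000,
  "quoteMaxToken": 4000,
  "maxTemperature": 1.2,
  "price": 0,
  "defaultSystem": ""
}
```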

View File

@@ -0,0 +1 @@
# Wait for completion

Binary file not shown. Before: 413 KiB · After: 437 KiB

Binary file not shown. After: 273 KiB

Binary file not shown. After: 92 KiB

Binary file not shown. After: 329 KiB

View File

@@ -2,22 +2,68 @@
sidebar_position: 1
---
# 模块编排介绍
# Quick Start
FastGpt V4 后将采用新的交互方式来构建 AI 应用。使用了“节点”编排的方式去掉原先的表单方式。提高可玩性和扩展性的同时也提高了上手的门槛,这篇文章就来简单介绍一下 “预览版” 的模块编排基本使用方法。
Starting from FastGpt V4, a new interactive way to build AI applications is introduced. It uses Flow node orchestration to implement complex workflows, improving playability and extensibility, but it also raises the barrier to entry; users with some development background will find it easier to pick up.
This article provides a brief introduction to the basics of module orchestration. Each module will be explained in detail in separate chapters.
![](./imgs/intro1.png)
![模块](./imgs/intro1.png)
## What is a Module?
预览版仅包含了 8 个模块,你可以利用它们来完全实现 V3 的知识库功能。此外,预览版还加入了问题分类模块,可以实现多路线任务。
In programming, a module can be understood as a function or interface. It can be seen as a **step**. By connecting multiple modules together, you can gradually achieve the final AI output.
In the following diagram, we have a simple AI conversation. It consists of a user input question, chat records, and an AI conversation module.
![](./imgs/intro2.png)
The workflow is as follows:
## 基础知识
1. After the user inputs a question, a request is sent to the server with the question, resulting in an output from the "User Question" module.
2. The "Chat Records" module retrieves the number of records from the database based on the set "Max Record Count", resulting in an output.
After the above two steps, we obtain the results of the two blue dots on the left. The results are injected into the "AI" conversation module on the right.
3. The AI conversation module uses the chat records and user question as inputs to call the conversation API and generate a response. (The conversation result output is hidden by default and will be sent to the client whenever the conversation module is triggered)
### 什么是模块
### Module Categories
在程序中,模块可以理解为一个个 function 或者接口。对于非技术背景同学,可以理解为它就是一个**步骤**。将多个模块一个个拼接起来,即可一步步的去实现最终的 AI 输出。
In terms of functionality, modules can be divided into 3 categories:
### 如何阅读和理解
1. Read-only modules: Global Variables, User Guide
2. System modules: chat records (no input, directly retrieved from the database), user question (workflow entry)
3. Function modules: knowledge base search, AI conversation, and other remaining modules (these modules have both input and output and can be freely combined)
1. 建议从左往右阅读。
2. 从 **用户问题** 模块开始。用户问题模块,代表的是用户发送了一段文本,触发任务开始。
3.
### Module Components
Each module consists of 3 core parts: fixed parameters, external inputs (represented by a circle on the left), and outputs (represented by a circle on the right).
For read-only modules, simply fill them in as prompted; they do not participate in workflow execution.
For system modules, usually only fixed parameters and outputs are present, and the focus is on where the output is directed.
For function modules, all 3 parts are important. Taking the AI conversation module in the following diagram as an example:
![](./imgs/intro3.png)
The chat model, temperature, reply limit, system prompt, and constraint words are fixed parameters. The system prompt and constraint words can also act as external inputs, which means that if an input flows into the system prompt, the originally filled-in content will be **overwritten**.
The triggers, referenced content, chat records, and user question are external inputs that need to flow in from the outputs of other modules.
The reply end is the output of this module.
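To make these three parts concrete, here is a minimal, illustrative TypeScript sketch of a module's shape. The type and field names are assumptions made for this explanation, not FastGPT's actual source code:

```ts
type Port<T = unknown> = {
  connected: boolean; // whether a line is drawn to/from another module
  value?: T;          // assigned once an upstream output flows in
};

type FlowModule = {
  fixedParams: Record<string, unknown>; // e.g. chat model, temperature, reply limit
  inputs: Record<string, Port>;         // circles on the left: trigger, user question...
  outputs: Record<string, Port>;        // circles on the right: reply end...
};
```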
### When are Modules Executed?
Remember the principles:
1. Only **connected** external inputs matter, i.e., the circles on the left are connected.
2. Execution is triggered when all connected inputs have values.
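Stated against the illustrative `FlowModule` shape sketched above, the two principles reduce to roughly this check (again a sketch, not FastGPT's implementation):

```ts
// A module runs as soon as every *connected* input has been assigned a value.
function shouldRun(mod: FlowModule): boolean {
  const connected = Object.values(mod.inputs).filter((input) => input.connected);
  return connected.every((input) => input.value !== undefined); // unconnected inputs are ignored
}
```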
#### Example 1:
The chat records module is automatically executed, so the input for chat records is automatically assigned a value. When the user sends a question, the "User Question" module outputs a value, and at this point, the user question input of the "AI Conversation" module is also assigned a value. After both connected inputs have values, the "AI Conversation" module is executed.
![](./imgs/intro1.png)
#### Example 2:
The following diagram shows an example of a knowledge base search.
1. The chat history flows into the "AI" conversation module.
2. The user's question flows into both the "Knowledge Base Search" and "AI Conversation" modules. Since the triggers and referenced content of the "AI Conversation" module are still empty, it will not be executed at this point.
3. The "Knowledge Base Search" module has only one external input, and it is assigned a value, so it starts executing.
4. When the "Knowledge Base Search" result is empty, the value of "Search Result Not Empty" is empty and will not be output. Therefore, the "AI Conversation" module cannot be executed due to the triggers not being assigned a value. However, "Search Result Empty" has an output and flows to the triggers of the specified reply module, so the "Specified Reply" module outputs a response.
5. When the "Knowledge Base Search" result is not empty, both "Search Result Not Empty" and "Referenced Content" have outputs, which flow into the "AI Conversation" module. At this point, all 4 external inputs of the "AI Conversation" module are assigned values, and it starts executing.
![](./imgs/intro4.png)
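A sketch of the branching in Example 2, under the same illustrative types: the search module only emits the output that matches its result, and the output it never emits leaves the downstream trigger unassigned (`emit` and the port names here are hypothetical):

```ts
declare function emit(port: string, value: unknown): void; // hypothetical: push a value to an output port

function onKnowledgeBaseSearchFinished(quotes: { q: string; a: string }[]) {
  if (quotes.length > 0) {
    emit('searchResultNotEmpty', true); // flows into the AI Conversation trigger
    emit('referencedContent', quotes);  // flows into the AI Conversation quoted content
  } else {
    emit('searchResultEmpty', true);    // flows into the Specified Reply trigger
  }
  // The branch whose trigger never receives a value fails the execution rule
  // above, so that module is simply never run.
}
```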
## How to Read?
1. It is recommended to read from left to right.
2. Start with the "User Question" module. The user question module represents a user sending a piece of text, triggering the task.
3. Pay attention to the "AI Conversation" and "Specified Reply" modules, as these are the places where the answers are output.

View File

@@ -0,0 +1,62 @@
# AI Chat
- Can be added multiple times (keeps lines from getting messy in complex layouts)
- External input available
- Static configuration available
- Trigger execution
- Core module
![](./imgs/aichat.png)
## Parameter Description
### Chat Model
You can configure the optional chat models through [data/config.json](/docs/develop/data_config/chat_models) and implement multi-model access through [OneAPI](http://localhost:3000/docs/develop/oneapi).
### Temperature & Reply Limit
Temperature: the lower it is, the more rigorous the answer and the fewer filler words (in testing, the difference doesn't seem significant).
Reply Limit: the maximum number of reply tokens (only effective for OpenAI models). Note that this limits the reply, not the total tokens.
### System Prompt (can be overridden by external input)
Placed at the beginning of the context array with the role `system`, used to guide the model. For specific usage, refer to the prompt tutorials you can find via any search engine~
### Constraint Words (can be overridden by external input)
Similar to the system prompt, the role is also `system`, but it is placed right before the question, giving it a stronger guiding effect.
### Quoted Content
Receives an externally input array, mainly generated by the "Knowledge Base Search" module; it can also be imported from external sources through the Http module. An example of the data structure is as follows:
```ts
type DataType = {
kb_id?: string;
id?: string;
q: string;
a: string;
source?: string;
};
// If it is externally imported content, try not to carry kb_id and id
const quoteList: DataType[] = [
{ kb_id: '11', id: '222', q: '你还', a: '哈哈', source: '' },
{ kb_id: '11', id: '333', q: '你还', a: '哈哈', source: '' },
{ kb_id: '11', id: '444', q: '你还', a: '哈哈', source: '' }
];
```
## Complete Context Composition
The data ultimately sent to the LLM is an array, with the following content and order:
```
[
System Prompt
Quoted Content
Chat History
Constraint Words
Question
]
```
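A minimal sketch of how this array might be assembled for an OpenAI-style chat API, assuming the role placement described above (system prompt and constraint words both use the `system` role, with the constraint words just before the question). The function and the quote serialization format are illustrative assumptions:

```ts
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

function buildContext(opts: {
  systemPrompt?: string;
  quotes: { q: string; a: string }[];
  history: ChatMessage[];
  constraintWords?: string;
  question: string;
}): ChatMessage[] {
  const messages: ChatMessage[] = [];
  if (opts.systemPrompt) {
    messages.push({ role: 'system', content: opts.systemPrompt });
  }
  if (opts.quotes.length > 0) {
    // How the quotes are flattened into text is an assumption for this sketch.
    const quoteText = opts.quotes.map(({ q, a }) => `${q}\n${a}`).join('\n---\n');
    messages.push({ role: 'system', content: quoteText });
  }
  messages.push(...opts.history);
  if (opts.constraintWords) {
    messages.push({ role: 'system', content: opts.constraintWords });
  }
  messages.push({ role: 'user', content: opts.question });
  return messages;
}
```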

View File

@@ -0,0 +1,9 @@
# User Guide
- Only one can be added
- No external input
- Not involved in actual scheduling
As shown in the image, you can give the user some guidance before they ask a question, and you can also set guiding questions.
![](./imgs/guide.png)

View File

@@ -0,0 +1,10 @@
# History
- Can be added multiple times (keeps lines from getting messy in complex layouts)
- No external input
- Entry point of the process
- Automatic execution
For each conversation, up to n chat records are retrieved from the database as context. Note that this is not a cap of n context entries for the current round of conversation, since the current round also includes: the prompt, constraint words, referenced content, and the question.
![](./imgs/history.png)

Binary file not shown. After: 84 KiB

Binary file not shown. After: 11 KiB

Binary file not shown. After: 77 KiB

Binary file not shown. After: 23 KiB

Binary file not shown. After: 182 KiB

Binary file not shown. After: 228 KiB

Binary file not shown. After: 115 KiB

Binary file not shown. After: 121 KiB

Binary file not shown. After: 285 KiB

Binary file not shown. After: 108 KiB

View File

@@ -0,0 +1,11 @@
# Specified Reply
- Can be added multiple times (keeps lines from getting messy in complex layouts)
- Supports manual input
- Supports external input
- Outputs results to the client
The specified reply module is usually used to reply in specific states. You can also pull off some fancy tricks with it, as in Figure 2. The triggering logic is very simple: either write the reply content in advance and fire it via a trigger, or leave the content empty and trigger it directly from an external input, replying with whatever was input.
![Figure 1](./imgs/specialreply.png)
![Figure 2](./imgs/specialreply2.png)

View File

@@ -0,0 +1,20 @@
---
sidebar_position: 1
---
# Introduction to Triggers
Observant users may notice that every function module has an external input called "Trigger", of type `any`.
Its **core function** is to control when a module executes. Let's take the "AI Conversation" module in the two knowledge base search examples below as an example:
| Figure 1 | Figure 2 |
| ---------------------------- | ---------------------------- |
| ![Demo](./imgs/trigger1.png) | ![Demo](./imgs/trigger2.png) |
In the "Knowledge Base Search" module, since the referenced content always has an output, the "Referenced Content" input of the "AI Dialogue" module will always be assigned a value, regardless of whether the content is found or not. If the trigger is not connected (Figure 2), the "AI Dialogue" module will always be executed after the search is completed.
Sometimes, you may want to perform additional processing when there is an empty search, such as replying with fixed content, calling another GPT with different prompts, or sending an HTTP request... In this case, you need to use a trigger and connect the **search result is not empty** with the **trigger**.
When the search result is empty, the "Knowledge Base Search" module will not output the result of **search result is not empty**, so the trigger of the "AI Dialogue" module will always be empty and it will not be executed.
In summary, by understanding the logic of module execution, you can use triggers flexibly:
**Execute when all external input fields (those with connections) are assigned values**.

View File

@@ -0,0 +1,8 @@
# User Question
- Can be added multiple times (keeps lines from getting messy in complex layouts)
- No external input
- Flow entry point
- Automatic execution
![](./imgs/chatinput.png)

View File

@@ -0,0 +1,19 @@
---
sidebar_position: 2
---
# Global Variables
- Only one can be added
- Manually configured
- Affects other modules
- Can be used for user guidance
You can set some questions before the conversation starts, allowing users to input or select their answers, and inject the results into other modules. Currently, it can only be injected into string type data (represented by a blue circle).
In the example below, two variables are defined: "Target Language" and "Dropdown Test (Ignore)". Users are asked to fill in the target language before the conversation starts. Combined with the user guide, this gives us a simple translation bot. The key of "Target Language" (language) is written into the constraint words of the "AI Conversation" module.
![](./imgs/variable.png)
Looking at the complete conversation log, we can see that the actual constraint words change from "Translate my question directly into {{language}}" to "Translate my question directly into English", because {{language}} is replaced by the variable.
![](./imgs/variable2.png)
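The substitution behaves like plain template interpolation; a minimal sketch under that assumption (the helper name is illustrative):

```ts
// Replace {{key}} placeholders with the user's variable values.
function injectVariables(text: string, variables: Record<string, string>): string {
  return text.replace(/\{\{(\w+)\}\}/g, (placeholder: string, key: string) =>
    key in variables ? variables[key] : placeholder // unknown keys are left as-is
  );
}

// injectVariables('Translate my question directly into {{language}}', { language: 'English' })
// => 'Translate my question directly into English'
```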

View File

@@ -20,6 +20,12 @@
"sidebar.docSidebar.category.Flow Modules": {
"message": "高级编排"
},
"sidebar.docSidebar.category.Modules Intro": {
"message": "模块介绍"
},
"sidebar.docSidebar.category.Examples": {
"message": "例子"
},
"sidebar.docSidebar.category.Other": {
"message": "其他"
}

View File

@@ -2,7 +2,7 @@
sidebar_position: 2
---
# 其他对话模型配置
# 配置其他对话模型
默认情况下,FastGPT 只配置了 GPT 的 3 个模型,如果你需要接入其他模型,需要进行一些额外配置。

Binary file not shown. Before: 413 KiB · After: 437 KiB

Binary file not shown. After: 273 KiB

Binary file not shown. After: 92 KiB

Binary file not shown. After: 329 KiB

View File

@@ -1,19 +1,83 @@
# 介绍(待完成)
---
sidebar_position: 1
---
FastGpt V4 后将采用新的交互方式来构建 AI 应用。使用了“节点”编排的方式去掉原先的表单方式。提高可玩性和扩展性的同时也提高了上手的门槛,这篇文章就来简单介绍一下 “预览版” 的模块编排基本使用方法。
# 快速了解
![模块](./imgs/intro1.png)
FastGpt V4 后将采用新的交互方式来构建 AI 应用。使用了 Flow 节点编排的方式来实现复杂工作流,提高可玩性和扩展性。但同时也提高了上手的门槛,有一定开发背景的用户使用起来会比较容易。
预览版仅包含了 8 个模块,你可以利用它们来完全实现 V3 的知识库功能。此外,预览版还加入了问题分类模块,可以实现多路线任务
这篇文章就来简单介绍一下模块编排基本内容。每个模块的详解会单独分出一章
## 基础知识
![](./imgs/intro1.png)
### 什么是模块
## 什么是模块
在程序中,模块可以理解为一个个 function 或者接口。对于非技术背景同学,可以理解为它就是一个**步骤**。将多个模块一个个拼接起来,即可一步步的去实现最终的 AI 输出。
在程序中,模块可以理解为一个个 function 或者接口。可以理解为它就是一个**步骤**。将多个模块一个个拼接起来,即可一步步的去实现最终的 AI 输出。
### 如何阅读和理解
如下图,是一个最简单的 AI 对话。它由用户输入的问题、聊天记录以及 AI 对话模块组成。
![](./imgs/intro2.png)
运行的流程如下:
1. 用户输入问题后,会向服务器发送一个请求,并携带问题。从而得到【用户问题】模块的一个输出。
2. 根据设置的【最长记录数】,从数据库中获取相应数量的聊天记录,从而得到【聊天记录】模块的输出。
经过上面两个流程就得到了左侧两个蓝色点的结果。结果会被注入到右侧的【AI】对话模块。
3. AI 对话模块根据传入的聊天记录和用户问题,调用对话接口,从而实现回答。(这里的对话结果输出隐藏了起来,默认只要触发了对话模块,就会往客户端输出内容)
### 模块分类
从功能上,可以分为 3 类:
1. 仅读模块:全局变量、用户引导
2. 系统模块:聊天记录(无输入,直接从数据库取)、用户问题(流程入口)
3. 功能模块:知识库搜索、AI 对话等剩余模块。(这些模块都有输入和输出,可以自由组合)
### 模块的组成
每个模块会包含 3 个核心部分:固定参数、外部输入(左边有个圆圈)和输出(右边有个圆圈)。
对于仅读模块,只需要根据提示填写即可,不参与流程运行。
对于系统模块,通常只有固定参数和输出,主要需要关注输出到哪个位置。
对于功能模块,通常这 3 部分都是重要的,以下图的 AI 对话为例
![](./imgs/intro3.png)
- 对话模型、温度、回复上限、系统提示词和限定词为固定参数,同时系统提示词和限定词也可以作为外部输入,意味着如果你有输入流向了系统提示词,那么原本填写的内容就会被**覆盖**。
- 触发器、引用内容、聊天记录和用户问题则为外部输入,需要从其他模块的输出流入。
- 回复结束则为该模块的输出。
### 模块什么时候执行?
记住原则:
1. 仅关心**已连接的**外部输入,即左边的圆圈被连接了。
2. 当连接内容都有值时触发。
#### 例子 1
聊天记录模块会自动执行,因此聊天记录输入会自动赋值。当用户发送问题时,【用户问题】模块会输出值,此时【AI 对话】模块的用户问题输入也会被赋值。两个连接的输入都被赋值后,会执行【AI 对话】模块。
![](./imgs/intro1.png)
#### 例子 2
下图是一个知识库搜索例子。
1. 历史记录会流入【AI】对话模块。
2. 用户的问题会流入【知识库搜索】和【AI 对话】模块,由于【AI 对话】模块的触发器和引用内容还是空,此时不会执行。
3. 【知识库搜索】模块仅一个外部输入,并且被赋值,开始执行。
4. 【知识库搜索】结果为空时,“搜索结果不为空”的值为空,不会输出,因此【AI 对话】模块会因为触发器没有赋值而无法执行。而“搜索结果为空”会有输出,流向指定回复的触发器,因此【指定回复】模块进行输出。
5. 【知识库搜索】结果不为空时,“搜索结果不为空”和“引用内容”都有输出,会流向【AI 对话】,此时【AI 对话】的 4 个外部输入都被赋值,开始执行。
![](./imgs/intro4.png)
## 如何阅读?
1. 建议从左往右阅读。
2. 从 **用户问题** 模块开始。用户问题模块,代表的是用户发送了一段文本,触发任务开始。
3.
3. 关注 AI 对话和指定回复模块,这两个模块是输出答案的地方。

View File

@@ -0,0 +1,64 @@
# AI 对话
- 可重复添加(复杂编排时候防止线太乱,可以更美观)
- 有外部输入
- 有静态配置
- 触发执行
- 核心模块
![](./imgs/aichat.png)
## 参数说明
### 对话模型
可以通过 [data/config.json](/docs/develop/data_config/chat_models) 配置可选的对话模型,通过 [OneAPI](http://localhost:3000/docs/develop/oneapi) 来实现多模型接入。
### 温度 & 回复上限
温度:越低回答越严谨,少废话(实测下来,感觉差别不大)
回复上限:最大回复 token 数量(只有 OpenAI 模型有效)。注意,是回复!不是总 tokens。
### 系统提示词(可被外部输入覆盖)
被放置在上下文数组的最前面,role 为 system,用于引导模型。具体用法参考各搜索引擎的教程~
### 限定词(可被外部输入覆盖)
与系统提示词类似,role 也是 system 类型,只不过位置会被放置在问题前,拥有更强的引导作用。
### 引用内容
接收一个外部输入的数组,主要是由【知识库搜索】模块生成,也可以由 Http 模块从外部引入。数据结构例子如下:
```ts
type DataType = {
kb_id?: string;
id?: string;
q: string;
a: string;
source?: string;
};
// 如果是外部引入的内容,尽量不要携带 kb_id 和 id
const quoteList: DataType[] = [
{ kb_id: '11', id: '222', q: '你还', a: '哈哈', source: '' },
{ kb_id: '11', id: '333', q: '你还', a: '哈哈', source: '' },
{ kb_id: '11', id: '444', q: '你还', a: '哈哈', source: '' }
];
```
## 完整上下文组成
最终发送给 LLM 大模型的数据是一个数组,内容和顺序如下:
```
[
系统提示词
引用内容
聊天记录
限定词
问题
]
```

View File

@@ -0,0 +1,9 @@
# 用户引导
- 仅可添加 1 个
- 无外部输入
- 不参与实际调度
如图,可以在用户提问前给予一定引导。并可以设置引导问题。
![](./imgs/guide.png)

View File

@@ -0,0 +1,10 @@
# 历史记录
- 可重复添加(复杂编排时候防止线太乱,可以更美观)
- 无外部输入
- 流程入口
- 自动执行
每次对话时,会从数据库取最多 n 条聊天记录作为上下文。注意,不是指本轮对话最多 n 条上下文,本轮对话还包括:提示词、限定词、引用内容和问题。
![](./imgs/history.png)

Binary file not shown. After: 84 KiB

Binary file not shown. After: 11 KiB

Binary file not shown. After: 77 KiB

Binary file not shown. After: 23 KiB

Binary file not shown. After: 182 KiB

Binary file not shown. After: 228 KiB

Binary file not shown. After: 115 KiB

Binary file not shown. After: 121 KiB

Binary file not shown. After: 285 KiB

Binary file not shown. After: 108 KiB

View File

@@ -0,0 +1,16 @@
# 指定回复
- 可重复添加(复杂编排时候防止线太乱,可以更美观)
- 可手动输入
- 可外部输入
- 会输出结果给客户端
指定回复模块通常用于特殊状态回复,当然你也可以像图 2 一样,实现一些比较骚的操作~ 触发逻辑非常简单,一种是写好回复内容,通过触发器触发;一种是不写回复内容,直接由外部输入触发,并回复输入的内容。
![](./imgs/specialreply.png)
图 1
![](./imgs/specialreply2.png)
图 2

View File

@@ -0,0 +1,23 @@
---
sidebar_position: 1
---
# 触发器介绍
细心的同学可以发现,在每个功能模块里都会有一个叫【触发器】的外部输入,并且是 any 类型。
它的**核心作用**就是控制模块的执行时机,以下图 2 个知识库搜索中的【AI 对话】模块为例子:
| 图 1 | 图 2 |
| ---------------------------- | ---------------------------- |
| ![Demo](./imgs/trigger1.png) | ![Demo](./imgs/trigger2.png) |
【知识库搜索】模块中,由于**引用内容**始终会有输出,会导致【AI 对话】模块的**引用内容**输入无论有没有搜到内容都会被赋值。如果此时不连接触发器(图 2),在搜索结束后必定会执行【AI 对话】模块。
有时候,你可能希望空搜索时候进行额外处理,例如:回复固定内容、调用其他提示词的 GPT、发送一个 HTTP 请求…… 此时就需要用到触发器,需要将 **搜索结果不为空** 与 **触发器** 连接起来。
当搜索结果为空时,【知识库搜索】模块不会输出 **搜索结果不为空** 的结果,因此 【AI 对话】 模块的触发器始终为空,便不会执行。
总之,记住模块执行的逻辑就可以灵活的使用触发器:
**外部输入字段(有连接的才有效)全部被赋值时候执行**

View File

@@ -0,0 +1,8 @@
# 用户问题
- 可重复添加(复杂编排时候防止线太乱,可以更美观)
- 无外部输入
- 流程入口
- 自动执行
![](./imgs/chatinput.png)

View File

@@ -0,0 +1,22 @@
---
sidebar_position: 2
---
# 全局变量
- 仅可添加 1 个
- 手动配置
- 对其他模块有影响
- 可作为用户引导
可以在对话前设置一些问题,让用户输入或选择,并将用户输入/选择的结果注入到其他模块中。目前仅会注入到 string 类型的数据里(对应蓝色的圆圈)。
如下图,定义了两个变量:目标语言和下拉框测试(忽略)
用户在对话前会被要求先填写目标语言,配合用户引导,我们就构建了一个简单的翻译机器人。**目标语言**的 keylanguage 被写入到【AI 对话】模块的限定词里。
![](./imgs/variable.png)
通过完整对话记录我们可以看到,实际的限定词从:“将我的问题直接翻译成{{language}}” 变成了 “将我的问题直接翻译成英语”,因为 {{language}} 被变量替换了。
![](./imgs/variable2.png)

View File

@@ -10,7 +10,7 @@ const sidebars = {
link: {
type: 'generated-index'
},
collapsed: false,
collapsed: true,
items: [
{
type: 'category',
@@ -32,6 +32,7 @@ const sidebars = {
link: {
type: 'generated-index'
},
collapsed: false,
items: [
{
type: 'autogenerated',
@@ -42,7 +43,6 @@ const sidebars = {
{
type: 'category',
label: 'Deploy',
collapsed: false,
link: {
type: 'generated-index'
},
@@ -56,15 +56,6 @@ const sidebars = {
'develop/oneapi'
]
},
{
type: 'category',
label: 'Deploy',
link: {
type: 'generated-index'
},
collapsed: false,
items: [{ type: 'autogenerated', dirName: 'deploy' }]
},
{
type: 'category',
label: 'Datasets',
@@ -80,7 +71,36 @@ const sidebars = {
link: {
type: 'generated-index'
},
items: [{ type: 'autogenerated', dirName: 'flow-modules' }]
collapsed: false,
items: [
'flow-modules/intro',
{
type: 'category',
label: 'Modules Intro',
link: {
type: 'generated-index'
},
items: [
{
type: 'autogenerated',
dirName: 'flow-modules/modules'
}
]
},
{
type: 'category',
label: 'Examples',
link: {
type: 'generated-index'
},
items: [
{
type: 'autogenerated',
dirName: 'flow-modules/examples'
}
]
}
]
},
{
type: 'category',