flow modules (#161)

* flow intro

* docs:flow modules

* docs:flow modules
Author: Archer
Date: 2023-08-09 18:07:58 +08:00 (committed via GitHub)
Parent: b6f9f77ed4
Commit: 657d0ad374
52 changed files with 485 additions and 56 deletions


@@ -0,0 +1,62 @@
# AI Chat
- Can be added multiple times (prevents tangled connection lines in complex arrangements and keeps the layout readable)
- Accepts external input
- Supports static configuration
- Executed via trigger
- Core module
![](./imgs/aichat.png)
## Parameter Description
### Chat Model
You can configure the available chat models in [data/config.json](/docs/develop/data_config/chat_models) and enable multi-model access through [OneAPI](http://localhost:3000/docs/develop/oneapi).
### Temperature & Reply Limit
Temperature: The lower the temperature, the more precise the answer and the fewer redundant words (in testing, the difference does not seem significant).
Reply Limit: The maximum number of tokens in the reply (only applies to OpenAI models). Note that this limits the reply, not the total tokens.
### System Prompt (can be overridden by external input)
Placed at the beginning of the context array with the role `system`, used to guide the model. For concrete usage, search online for prompt-writing tutorials.
### Constraint Words (can be overridden by external input)
Similar to the system prompt (the role is also `system`), but placed immediately before the question, which gives it a stronger guiding effect.
### Quoted Content
Receives an array from external input, typically generated by the "Knowledge Base Search" module; it can also be imported from external sources through the HTTP module. An example of the data structure:
```ts
type DataType = {
kb_id?: string;
id?: string;
q: string;
a: string;
source?: string;
};
// Externally imported content should omit kb_id and id
const quoteList: DataType[] = [
  { kb_id: '11', id: '222', q: 'Hello', a: 'Haha', source: '' },
  { kb_id: '11', id: '333', q: 'Hello', a: 'Haha', source: '' },
  { kb_id: '11', id: '444', q: 'Hello', a: 'Haha', source: '' }
];
```
## Complete Context Composition
The data ultimately sent to the LLM is an array with the following content, in order:
```
[
System Prompt
Quoted Content
Chat History
Constraint Words
Question
]
```
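As a rough sketch of the assembly order above (the helper and field names here are hypothetical, not FastGPT's actual implementation):

```ts
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Hypothetical sketch of the context assembly order described above.
// `quotes` is assumed to be the quoted content already rendered to text.
function buildContext(opts: {
  systemPrompt?: string;
  quotes: string[];
  history: ChatMessage[];
  constraint?: string; // constraint words, placed just before the question
  question: string;
}): ChatMessage[] {
  const messages: ChatMessage[] = [];
  if (opts.systemPrompt) messages.push({ role: 'system', content: opts.systemPrompt });
  for (const quote of opts.quotes) messages.push({ role: 'system', content: quote });
  messages.push(...opts.history);
  if (opts.constraint) messages.push({ role: 'system', content: opts.constraint });
  messages.push({ role: 'user', content: opts.question });
  return messages;
}
```

The exact roles used for quoted content may differ in the real implementation; the point is the ordering.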


@@ -0,0 +1,9 @@
# User Guide
- Only one can be added
- No external input
- Not involved in actual scheduling
As shown in the image, you can show the user some guidance before they ask a question, and you can also set suggested opening questions.
![](./imgs/guide.png)


@@ -0,0 +1,10 @@
# History
- Can be added multiple times (prevents tangled connection lines in complex arrangements and keeps the layout readable)
- No external input
- Flow entry point
- Automatic execution
For each conversation turn, up to n chat records are retrieved from the database as context. Note that n is not the total context size for the current turn: the current turn also includes the prompt, constraint words, quoted content, and the question itself.
![](./imgs/history.png)
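As a minimal sketch (hypothetical helper, not the actual database query), "up to n records" simply means taking the most recent n entries:

```ts
type ChatRecord = { role: 'user' | 'assistant'; content: string };

// Keep only the `maxRecords` most recent entries. The system prompt,
// constraint words, quoted content and the current question are added
// to the context on top of whatever this returns.
function getRecentHistory(all: ChatRecord[], maxRecords: number): ChatRecord[] {
  return maxRecords <= 0 ? [] : all.slice(-maxRecords);
}
```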

(10 binary image files added under ./imgs/, ranging from 11 KiB to 285 KiB; not shown)


@@ -0,0 +1,11 @@
# Special Reply
- Can be added multiple times (prevents tangled connection lines in complex arrangements and keeps the layout readable)
- Supports manual input
- Accepts external input
- Outputs its result to the client
The special reply module is usually used to respond in specific states, though you can also build fancier behavior, as in Figure 2. The triggering logic is simple: either write the reply content and fire it through a trigger, or leave the content empty, trigger it via external input, and reply with whatever was passed in.
![Figure 1](./imgs/specialreply.png)
![Figure 2](./imgs/specialreply2.png)


@@ -0,0 +1,20 @@
---
sidebar_position: 1
---
# Introduction to Triggers
You may have noticed that every functional module has an external input called "Trigger", of type `any`.
Its **core function** is to control when a module executes. Take the "AI Chat" module in the two knowledge base search examples below:
| Figure 1 | Figure 2 |
| ---------------------------- | ---------------------------- |
| ![Demo](./imgs/trigger1.png) | ![Demo](./imgs/trigger2.png) |
In the "Knowledge Base Search" module, the quoted-content output is always produced, so the "Quoted Content" input of the "AI Chat" module is always assigned a value, whether or not anything was found. Without a connected trigger (Figure 2), the "AI Chat" module therefore always executes after the search completes.
Sometimes you want special handling when the search comes back empty, such as replying with fixed content, calling another GPT with a different prompt, or sending an HTTP request. In that case, connect the **search result is not empty** output to the **trigger**.
When the search result is empty, the "Knowledge Base Search" module does not emit **search result is not empty**, so the trigger of the "AI Chat" module stays empty and the module is never executed.
In short, once you understand the execution logic, you can use triggers flexibly:
**A module executes when all of its connected external input fields have been assigned values.**
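That execution rule can be sketched as follows (a simplification with hypothetical types, not the actual scheduler):

```ts
type ModuleInput = { connected: boolean; value?: unknown };

// A module executes only when every input that has a connection has
// received a value. An unconnected trigger is ignored entirely, while a
// connected trigger that never receives a value blocks execution.
function shouldExecute(inputs: ModuleInput[]): boolean {
  return inputs
    .filter((input) => input.connected)
    .every((input) => input.value !== undefined);
}
```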


@@ -0,0 +1,8 @@
# User Questions
- Can be added multiple times (prevents tangled connection lines in complex arrangements and keeps the layout readable)
- No external input
- Flow entry point
- Automatic execution
![](./imgs/chatinput.png)


@@ -0,0 +1,19 @@
---
sidebar_position: 2
---
# Global Variables
- Only one can be added
- Manually configured
- Affects other modules
- Can be used for user guidance
You can define questions to ask before the conversation starts, let the user type or select an answer, and inject the result into other modules. Currently, variables can only be injected into string-type inputs (shown as a blue circle).
In the example below, two variables are defined: "Target Language" and "Dropdown Test (Ignore)". The user is asked to fill in the target language before the conversation starts; combined with user guidance, this builds a simple translation bot. The key of "Target Language" (`language`) is written into the constraint words of the "AI Chat" module.
![](./imgs/variable.png)
Looking at the full conversation log, the actual constraint words change from "Translate my question directly into {{language}}" to "Translate my question directly into English", because {{language}} is replaced by the variable's value.
![](./imgs/variable2.png)
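The {{language}} replacement seen in the log can be sketched as a simple template substitution (hypothetical helper; variables are assumed to be plain strings):

```ts
// Replace every {{key}} placeholder with the matching variable value;
// unknown keys are left untouched.
function injectVariables(text: string, variables: Record<string, string>): string {
  return text.replace(/\{\{(\w+)\}\}/g, (match: string, key: string) =>
    key in variables ? variables[key] : match
  );
}
```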