FastGPT/packages/service/common/vectorDB/milvus/index.ts
Archer 76d6234de6 V4.14.7 features (#6406)

2026-02-12 16:37:50 +08:00

322 lines
9.6 KiB
TypeScript

import { DataType, LoadState, MilvusClient } from '@zilliz/milvus2-sdk-node';
import {
  DatasetVectorDbName,
  DatasetVectorTableName,
  MILVUS_ADDRESS,
  MILVUS_TOKEN
} from '../constants';
import type { VectorControllerType } from '../type';
import { retryFn } from '@fastgpt/global/common/system/utils';
import { getLogger, LogCategories } from '../../logger';
import { customNanoid } from '@fastgpt/global/common/string/tools';

const logger = getLogger(LogCategories.INFRA.VECTOR);
export class MilvusCtrl implements VectorControllerType {
  constructor() {}

  getClient = async () => {
    if (!MILVUS_ADDRESS) {
      return Promise.reject('MILVUS_ADDRESS is not set');
    }
    if (global.milvusClient) return global.milvusClient;

    global.milvusClient = new MilvusClient({
      address: MILVUS_ADDRESS,
      token: MILVUS_TOKEN
    });

    await global.milvusClient.connectPromise;
    logger.info('Milvus connected', { address: MILVUS_ADDRESS });

    return global.milvusClient;
  };
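// getClient caches the connected client on `global`, so repeated calls reuse one
// connection. A minimal, SDK-free sketch of the same lazy-singleton pattern
// (`FakeClient` and `getClientOnce` are hypothetical stand-ins, not FastGPT code):

```typescript
// Hypothetical stand-in for MilvusClient: counts how many times it was constructed.
class FakeClient {
  static constructed = 0;
  constructor(public address: string) {
    FakeClient.constructed += 1;
  }
}

// Module-level cache instead of `global`, same idea: construct once, reuse forever.
let cached: FakeClient | undefined;

function getClientOnce(address: string): FakeClient {
  if (!address) throw new Error('address is not set'); // mirrors the MILVUS_ADDRESS guard
  if (cached) return cached;
  cached = new FakeClient(address);
  return cached;
}
```

// Note that the real method assigns `global.milvusClient` before awaiting
// `connectPromise`, so concurrent callers share the same in-flight connection
// rather than constructing a second client.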
  init: VectorControllerType['init'] = async () => {
    const client = await this.getClient();

    // Init database (Zilliz Cloud may error on the database APIs, hence the try/catch)
    try {
      const { db_names } = await client.listDatabases();

      if (!db_names.includes(DatasetVectorDbName)) {
        await client.createDatabase({
          db_name: DatasetVectorDbName
        });
      }

      await client.useDatabase({
        db_name: DatasetVectorDbName
      });
    } catch (error) {
      logger.warn('Milvus database initialization skipped or failed', { error });
    }

    // Init collection and index
    const { value: hasCollection } = await client.hasCollection({
      collection_name: DatasetVectorTableName
    });
    if (!hasCollection) {
      const result = await client.createCollection({
        collection_name: DatasetVectorTableName,
        description: 'Store dataset vector',
        enableDynamicField: true,
        fields: [
          {
            name: 'id',
            data_type: DataType.Int64,
            is_primary_key: true,
            autoID: false // disable auto id; ids are generated manually in insert()
          },
          {
            name: 'vector',
            data_type: DataType.FloatVector,
            dim: 1536
          },
          { name: 'teamId', data_type: DataType.VarChar, max_length: 64 },
          { name: 'datasetId', data_type: DataType.VarChar, max_length: 64 },
          { name: 'collectionId', data_type: DataType.VarChar, max_length: 64 },
          {
            name: 'createTime',
            data_type: DataType.Int64
          }
        ],
        index_params: [
          {
            field_name: 'vector',
            index_name: 'vector_HNSW',
            index_type: 'HNSW',
            metric_type: 'IP',
            params: { efConstruction: 32, M: 64 }
          },
          {
            field_name: 'teamId',
            index_type: 'Trie'
          },
          {
            field_name: 'datasetId',
            index_type: 'Trie'
          },
          {
            field_name: 'collectionId',
            index_type: 'Trie'
          },
          {
            field_name: 'createTime',
            index_type: 'STL_SORT'
          }
        ]
      });

      logger.info('Milvus collection created', {
        collection: DatasetVectorTableName,
        result
      });
    }

    const { state: colLoadState } = await client.getLoadState({
      collection_name: DatasetVectorTableName
    });
    if (
      colLoadState === LoadState.LoadStateNotExist ||
      colLoadState === LoadState.LoadStateNotLoad
    ) {
      await client.loadCollectionSync({
        collection_name: DatasetVectorTableName
      });
      logger.info('Milvus collection loaded', { collection: DatasetVectorTableName });
    }
  };
  insert: VectorControllerType['insert'] = async (props) => {
    const client = await this.getClient();
    const { teamId, datasetId, collectionId, vectors } = props;

    const generateId = () => {
      // In JS the max safe integer is 2^53 - 1 (9007199254740991),
      // so the first digit is drawn from 1-8
      // and the remaining 15 digits can be fully random
      const firstDigit = customNanoid('12345678', 1);
      const restDigits = customNanoid('1234567890', 15);
      return Number(`${firstDigit}${restDigits}`);
    };

    const result = await client.insert({
      collection_name: DatasetVectorTableName,
      data: vectors.map((vector) => ({
        id: generateId(),
        vector,
        teamId: String(teamId),
        datasetId: String(datasetId),
        collectionId: String(collectionId),
        createTime: Date.now()
      }))
    });

    const insertIds = (() => {
      if ('int_id' in result.IDs) {
        return result.IDs.int_id.data.map((id) => String(id));
      }
      return result.IDs.str_id.data.map((id) => String(id));
    })();

    return {
      insertIds
    };
  };
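// The `generateId` helper above packs 16 random decimal digits into a JS number
// while staying under `Number.MAX_SAFE_INTEGER` (2^53 - 1 = 9007199254740991,
// itself 16 digits starting with 9), which is why the first digit is restricted
// to 1-8. The same scheme can be sketched without `customNanoid`
// (`generateInt64SafeId` is a hypothetical name):

```typescript
// 16 decimal digits, first digit 1-8, so the result is at most
// 8999999999999999 < Number.MAX_SAFE_INTEGER and always exactly representable.
function generateInt64SafeId(): number {
  const firstDigit = 1 + Math.floor(Math.random() * 8); // 1..8
  let rest = '';
  for (let i = 0; i < 15; i++) {
    rest += Math.floor(Math.random() * 10); // 0..9
  }
  return Number(`${firstDigit}${rest}`);
}
```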
  delete: VectorControllerType['delete'] = async (props) => {
    const { teamId } = props;
    const client = await this.getClient();

    const teamIdWhere = `(teamId=="${String(teamId)}")`;
    const where = await (() => {
      if ('id' in props && props.id) return `(id==${props.id})`;

      if ('datasetIds' in props && props.datasetIds) {
        const datasetIdWhere = `(datasetId in [${props.datasetIds
          .map((id) => `"${String(id)}"`)
          .join(',')}])`;

        if ('collectionIds' in props && props.collectionIds) {
          return `${datasetIdWhere} and (collectionId in [${props.collectionIds
            .map((id) => `"${String(id)}"`)
            .join(',')}])`;
        }

        return `${datasetIdWhere}`;
      }

      if ('idList' in props && Array.isArray(props.idList)) {
        if (props.idList.length === 0) return;
        return `(id in [${props.idList.map((id) => String(id)).join(',')}])`;
      }

      return Promise.reject('deleteDatasetData: no where');
    })();

    if (!where) return;

    const concatWhere = `${teamIdWhere} and ${where}`;

    await client.delete({
      collection_name: DatasetVectorTableName,
      filter: concatWhere
    });
  };
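// delete() assembles a Milvus boolean filter expression from whichever scope
// fields are present. The branching can be isolated as a pure function for
// illustration (`DeleteProps` and `buildDeleteFilter` are simplified,
// hypothetical stand-ins for the real prop types):

```typescript
// Simplified stand-in for the real delete() props union.
type DeleteProps = {
  teamId: string;
  id?: string;
  datasetIds?: string[];
  collectionIds?: string[];
  idList?: string[];
};

// Builds the boolean expression delete() passes to client.delete(),
// or undefined when an empty idList makes the call a no-op.
function buildDeleteFilter(props: DeleteProps): string | undefined {
  const teamIdWhere = `(teamId=="${props.teamId}")`;
  const where = (() => {
    if (props.id) return `(id==${props.id})`;
    if (props.datasetIds) {
      const datasetIdWhere = `(datasetId in [${props.datasetIds.map((id) => `"${id}"`).join(',')}])`;
      if (props.collectionIds) {
        return `${datasetIdWhere} and (collectionId in [${props.collectionIds.map((id) => `"${id}"`).join(',')}])`;
      }
      return datasetIdWhere;
    }
    if (props.idList) {
      if (props.idList.length === 0) return undefined;
      return `(id in [${props.idList.join(',')}])`;
    }
    throw new Error('deleteDatasetData: no where');
  })();

  if (!where) return undefined;
  return `${teamIdWhere} and ${where}`;
}
```

// The team scope is always prepended, so even an id-based delete can never
// touch another team's rows.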
  embRecall: VectorControllerType['embRecall'] = async (props) => {
    const client = await this.getClient();
    const { teamId, datasetIds, vector, limit, forbidCollectionIdList, filterCollectionIdList } =
      props;

    // Forbid collection
    const formatForbidCollectionIdList = (() => {
      if (!filterCollectionIdList) return forbidCollectionIdList;
      const list = forbidCollectionIdList
        .map((id) => String(id))
        .filter((id) => !filterCollectionIdList.includes(id));
      return list;
    })();
    const forbidColQuery =
      formatForbidCollectionIdList.length > 0
        ? `and (collectionId not in [${formatForbidCollectionIdList.map((id) => `"${id}"`).join(',')}])`
        : '';

    // Filter collection id
    const formatFilterCollectionId = (() => {
      if (!filterCollectionIdList) return;

      return filterCollectionIdList
        .map((id) => String(id))
        .filter((id) => !forbidCollectionIdList.includes(id));
    })();
    const collectionIdQuery = formatFilterCollectionId
      ? `and (collectionId in [${formatFilterCollectionId.map((id) => `"${id}"`).join(',')}])`
      : ``;

    // Empty data
    if (formatFilterCollectionId && formatFilterCollectionId.length === 0) {
      return { results: [] };
    }

    const filterStr =
      `(teamId == "${teamId}") and (datasetId in [${datasetIds.map((id) => `"${id}"`).join(',')}]) ${collectionIdQuery} ${forbidColQuery}`.trim();

    const searchResult = await retryFn(() =>
      client.search({
        collection_name: DatasetVectorTableName,
        vector: vector,
        limit,
        expr: filterStr,
        output_fields: ['collectionId']
      })
    );

    const rows = (searchResult.results || []) as {
      score: number;
      id: string;
      collectionId: string;
    }[];

    return {
      results: rows.map((item) => ({
        id: String(item.id),
        collectionId: item.collectionId,
        score: item.score
      }))
    };
  };
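// embRecall reconciles the forbid and filter lists so the two clauses never
// contradict each other: an id present in both lists is dropped from both, and
// since the positive `in` clause then omits it, it is effectively excluded from
// recall. A pure-function sketch of that reconciliation
// (`reconcileCollectionLists` is a hypothetical name):

```typescript
// Drops ids that appear in both lists from each side, mirroring the two IIFEs
// in embRecall. When no filter list is supplied, the forbid list passes through.
function reconcileCollectionLists(
  forbid: string[],
  filter?: string[]
): { forbid: string[]; filter?: string[] } {
  if (!filter) return { forbid };
  return {
    forbid: forbid.filter((id) => !filter.includes(id)),
    filter: filter.filter((id) => !forbid.includes(id))
  };
}
```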
  getVectorCount: VectorControllerType['getVectorCount'] = async (props) => {
    const { teamId, datasetId, collectionId } = props;
    const client = await this.getClient();

    // Build filter conditions dynamically (each condition wrapped in parentheses)
    const filterConditions: string[] = [];
    if (teamId) {
      filterConditions.push(`(teamId == "${String(teamId)}")`);
    }
    if (datasetId) {
      filterConditions.push(`(datasetId == "${String(datasetId)}")`);
    }
    if (collectionId) {
      filterConditions.push(`(collectionId == "${String(collectionId)}")`);
    }

    // If no conditions provided, count all (empty filter)
    const filter = filterConditions.length > 0 ? filterConditions.join(' and ') : '';

    const result = await client.query({
      collection_name: DatasetVectorTableName,
      output_fields: ['count(*)'],
      filter: filter || undefined
    });

    const total = result.data?.[0]?.['count(*)'];
    return Number(total);
  };
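// getVectorCount builds its filter from only the scopes that were passed,
// joining each parenthesised condition with `and`; with no scopes the filter is
// empty and the query counts every row. As a standalone sketch
// (`buildCountFilter` is a hypothetical helper):

```typescript
// One parenthesised condition per provided scope, joined with `and`.
// An empty result string means "count everything".
function buildCountFilter(scope: {
  teamId?: string;
  datasetId?: string;
  collectionId?: string;
}): string {
  const conditions: string[] = [];
  if (scope.teamId) conditions.push(`(teamId == "${scope.teamId}")`);
  if (scope.datasetId) conditions.push(`(datasetId == "${scope.datasetId}")`);
  if (scope.collectionId) conditions.push(`(collectionId == "${scope.collectionId}")`);
  return conditions.join(' and ');
}
```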
  getVectorDataByTime: VectorControllerType['getVectorDataByTime'] = async (start, end) => {
    const client = await this.getClient();
    const startTimestamp = new Date(start).getTime();
    const endTimestamp = new Date(end).getTime();

    const result = await client.query({
      collection_name: DatasetVectorTableName,
      output_fields: ['id', 'teamId', 'datasetId'],
      filter: `(createTime >= ${startTimestamp}) and (createTime <= ${endTimestamp})`
    });

    const rows = result.data as {
      id: string;
      teamId: string;
      datasetId: string;
    }[];

    return rows.map((item) => ({
      id: String(item.id),
      teamId: item.teamId,
      datasetId: item.datasetId
    }));
  };
}