mirror of https://github.com/labring/FastGPT.git, synced 2026-05-10 01:08:08 +08:00
76d6234de6
* Agent features (#6345)
  * Test agent (#6220): plan response in UI, agent config, tool select UX, chat UI, agent edit form, save chat
  * Complete agent parent (#6049): add role and tools filling; add file upload
  * Top agent (#6062); skill editor UI; rewrite types with zod
  * Skill agent (#6089): skill CRUD; render skill info in the frontend UI; remove chatId and chatItemId; skill match; skill management
  * Skill tool config (#6114); feat: load tools in agent; skill memory (#6126)
  * perf: agent skill editor, helper-bot UI, request context, agent usage, agent context and pause, plan response
  * Test agent single skill (#6184): top box fill, prompt fix
  * Test agent new (#6219): re-plan support
* feat: consolidate agent and MCP improvements (17 commits): MCP tools enhancements and fixes; agent system improvements and optimizations; auth limit and prompt updates; tool response compression and error tracking; simple app adaptation; code quality improvements (TypeScript, ESLint, Zod); version type migration to schema; remove deprecated useRequest2; add LLM error tracking; toolset ID validation fixes
* fix: transform avatar copy; perf: filter invalid tools; update LLM response storage time; fix: OpenAPI schema; update skill description; feat: cache hit data; i18n
* chat logs support error filter & user search (#6373): search logs by user name, error filter, overflow fix, init script fix, perf: get log users
* Fix: agent (#6376): 1. add the frontend helper-generation system prompt to the context 2. frontend rendering (icons) for MCP tools 3. link the file-read tool to file upload 4. add a retry scheme for malformed helper-generation responses 5. "ask" no longer appears in plan steps 6. add avatar and interaction UI for helper generation; fix: read_file; helper-bot UI; remove unused imports
* fix date variable required & model auth (#6386); feat: add chat id to finish callback
* fix: iPhone Safari shareId (#6387); fix: MCP file list can't be set; fix: reason output field
* fix: skip JSON validation for HTTP tool body with variables (#6392)
* workflow fitview; perf: memory selection; perf: toolcall auto adapt; fix: catch workflow errors; perf: pagination types; fix: simple app tool select; add default avatar to log users; dataset select UI; rename version
* feat: add tests for global/common, global/ai, app, chat, core, and server API; perf: init shell
* fix: plan fake tool (#6398): 1. prompt anti-injection 2. skip plan when no tools are available, preventing fake tool generation; agent dataset; dataset presetInfo; perf: prompt
* adapt Kimi 2.5 think toolcall; feat: invoke FastGPT user info (#6403); fix: invoke FastGPT user info returns orgs (#6404); retry helper bot (#6405)
* perf: internal IP check; adapt paginated records; tool call adaptation; fix: agent initial version; adapt completions v1; feat: instrumentation check; rename skill
* add workflow demo mode tracks (#6407); chore: unify skills directory naming to lowercase (rename .claude/Skills/ to .claude/skills/ for consistency)
* fix(workflow): filter out orphan edges to prevent runtime errors (#6399): runtime edges referencing non-existent nodes can crash workflow dispatch; extract the logic into filterOrphanEdges in utils.ts, add performance monitoring (warn if >100 ms), detailed edge inspection in debug mode, JSDoc on causes of orphan edges (migration, manual edits), and unit tests covering edge cases and performance (1000 edges)
* fix: resolve $ref references in MCP tool input schemas (#6395) (#6409)
* chore(docs): add fastgpt / fastgpt-plugin version choice guide (#6411)
* fix: dataset cite and description info (#6410): 1. add dataset citations (plan steps and direct dataset calls) 2. @dataset tool in the prompt box 3. change the dataset_search step description in plan to Chinese
* fix: merge ECharts toolbox options instead of overwriting (#6269) (#6412)
* feat: integrate logtape and otel (#6400): add basic infra and tx logs, inject request id into context, log categories and sub-categories, SigNoz docs; chore: migrate all addLog calls to logtape (#6417)
* fix: tool check

Co-authored-by: YeYuheng <57035043+YYH211@users.noreply.github.com>
Co-authored-by: xxyyh <2289112474@qq>
Co-authored-by: heheer <heheer@sealos.io>
Co-authored-by: Finley Ge <32237950+FinleyGe@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: xuyafei1996 <54217479+xuyafei1996@users.noreply.github.com>
Co-authored-by: ToukoYui <2331631097@qq.com>
Co-authored-by: roy <whoeverimf5@gmail.com>
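The orphan-edge fix above (#6399) filters runtime edges that reference nodes no longer present in the graph. A minimal sketch of that idea, using hypothetical node/edge shapes (the real FastGPT runtime types differ):

```typescript
// Hypothetical simplified shapes; FastGPT's RuntimeNodeItemType / RuntimeEdgeItemType
// carry more fields, but only the ids matter for orphan filtering.
type RuntimeNode = { nodeId: string };
type RuntimeEdge = { source: string; target: string };

// Drop edges whose source or target node does not exist (orphan edges),
// which can otherwise cause crashes during workflow dispatch.
function filterOrphanEdges(nodes: RuntimeNode[], edges: RuntimeEdge[]): RuntimeEdge[] {
  const nodeIds = new Set(nodes.map((n) => n.nodeId));
  return edges.filter((e) => nodeIds.has(e.source) && nodeIds.has(e.target));
}

const nodes: RuntimeNode[] = [{ nodeId: 'a' }, { nodeId: 'b' }];
const edges: RuntimeEdge[] = [
  { source: 'a', target: 'b' },
  { source: 'a', target: 'ghost' } // orphan: 'ghost' was deleted or never existed
];
console.log(filterOrphanEdges(nodes, edges).length); // 1
```

Running the filter once before dispatch keeps execution stable even when stored graph data is inconsistent (e.g. after migrations or manual edits).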
383 lines · 9.4 KiB · TypeScript
import { describe, it, expect, vi, beforeEach } from 'vitest';
import fs from 'fs';
import os from 'os';
import path from 'path';

// Hoist all mock functions so they're available in vi.mock factories
const {
  mockReadRawContentFromBuffer,
  mockAxiosPost,
  mockDoc2xParsePDF,
  mockTextinParsePDF,
  mockUploadImage2S3Bucket
} = vi.hoisted(() => ({
  mockReadRawContentFromBuffer: vi.fn(async ({ extension, buffer, encoding }: any) => {
    if (extension === 'txt') {
      return {
        rawText: buffer.toString(encoding || 'utf-8'),
        formatText: buffer.toString(encoding || 'utf-8'),
        imageList: []
      };
    }
    return {
      rawText: `parsed-${extension}-content`,
      formatText: `parsed-${extension}-content`,
      imageList: []
    };
  }),
  mockAxiosPost: vi.fn(),
  mockDoc2xParsePDF: vi.fn().mockResolvedValue({
    pages: 1,
    text: 'doc2x-parsed-text',
    imageList: []
  }),
  mockTextinParsePDF: vi.fn().mockResolvedValue({
    pages: 1,
    text: 'textin-parsed-text',
    imageList: []
  }),
  mockUploadImage2S3Bucket: vi.fn().mockResolvedValue('https://s3.example.com/uploaded-image.png')
}));

vi.mock('@fastgpt/service/worker/function', () => ({
  readRawContentFromBuffer: (...args: any[]) => mockReadRawContentFromBuffer(...args)
}));

vi.mock('@fastgpt/service/common/api/axios', () => ({
  axios: {
    get: vi.fn(),
    post: mockAxiosPost
  }
}));

vi.mock('@fastgpt/service/thirdProvider/doc2x', () => ({
  useDoc2xServer: vi.fn(() => ({
    parsePDF: mockDoc2xParsePDF
  }))
}));

vi.mock('@fastgpt/service/thirdProvider/textin', () => ({
  useTextinServer: vi.fn(() => ({
    parsePDF: mockTextinParsePDF
  }))
}));

vi.mock('@fastgpt/service/support/wallet/usage/controller', () => ({
  createPdfParseUsage: vi.fn()
}));

vi.mock('@fastgpt/service/common/s3/utils', async (importOriginal) => {
  const mod = await importOriginal<typeof import('@fastgpt/service/common/s3/utils')>();
  return {
    ...mod,
    uploadImage2S3Bucket: mockUploadImage2S3Bucket
  };
});

import {
  readRawTextByLocalFile,
  readFileContentByBuffer
} from '@fastgpt/service/common/file/read/utils';

const teamId = 'test-team-id';
const tmbId = 'test-tmb-id';

describe('readRawTextByLocalFile', () => {
  let tmpDir: string;

  beforeEach(() => {
    tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'fastgpt-read-test-'));
  });

  it('should read a txt file and return its content', async () => {
    const filePath = path.join(tmpDir, 'test.txt');
    fs.writeFileSync(filePath, 'Hello World', 'utf-8');

    const result = await readRawTextByLocalFile({
      teamId,
      tmbId,
      path: filePath,
      encoding: 'utf-8'
    });

    expect(result.rawText).toBe('Hello World');
  });

  it('should extract extension from file path', async () => {
    const filePath = path.join(tmpDir, 'document.pdf');
    fs.writeFileSync(filePath, 'fake-pdf-content');

    const result = await readRawTextByLocalFile({
      teamId,
      tmbId,
      path: filePath,
      encoding: 'utf-8'
    });

    expect(result.rawText).toBe('parsed-pdf-content');
  });
});

describe('readFileContentByBuffer', () => {
  beforeEach(() => {
    global.systemEnv = {} as any;
  });

  it('should parse a txt buffer', async () => {
    const buffer = Buffer.from('Hello from buffer');

    const result = await readFileContentByBuffer({
      teamId,
      tmbId,
      extension: 'txt',
      buffer,
      encoding: 'utf-8'
    });

    expect(result.rawText).toBe('Hello from buffer');
  });

  it('should use system parse for non-pdf files', async () => {
    const buffer = Buffer.from('markdown content');

    const result = await readFileContentByBuffer({
      teamId,
      tmbId,
      extension: 'md',
      buffer,
      encoding: 'utf-8'
    });

    expect(result.rawText).toBe('parsed-md-content');
  });

  it('should use system parse for pdf when customPdfParse is false', async () => {
    const buffer = Buffer.from('pdf content');

    const result = await readFileContentByBuffer({
      teamId,
      tmbId,
      extension: 'pdf',
      buffer,
      encoding: 'utf-8',
      customPdfParse: false
    });

    expect(result.rawText).toBe('parsed-pdf-content');
  });

  it('should use system parse for pdf when customPdfParse is true but no service configured', async () => {
    global.systemEnv = { customPdfParse: {} } as any;

    const buffer = Buffer.from('pdf content');

    const result = await readFileContentByBuffer({
      teamId,
      tmbId,
      extension: 'pdf',
      buffer,
      encoding: 'utf-8',
      customPdfParse: true
    });

    expect(result.rawText).toBe('parsed-pdf-content');
  });

  it('should return formatText when getFormatText is true', async () => {
    const buffer = Buffer.from('content');

    const result = await readFileContentByBuffer({
      teamId,
      tmbId,
      extension: 'txt',
      buffer,
      encoding: 'utf-8',
      getFormatText: true
    });

    expect(result.rawText).toBeDefined();
  });

  it('should return rawText when getFormatText is false', async () => {
    const buffer = Buffer.from('content');

    const result = await readFileContentByBuffer({
      teamId,
      tmbId,
      extension: 'txt',
      buffer,
      encoding: 'utf-8',
      getFormatText: false
    });

    expect(result.rawText).toBe('content');
  });

  it('should use custom URL service for pdf when configured', async () => {
    global.systemEnv = {
      customPdfParse: { url: 'http://custom-pdf-service.com/parse', key: 'test-key' }
    } as any;

    mockAxiosPost.mockResolvedValueOnce({
      data: {
        pages: 3,
        markdown: 'custom-service-parsed-text'
      }
    });

    const buffer = Buffer.from('pdf content');

    const result = await readFileContentByBuffer({
      teamId,
      tmbId,
      extension: 'pdf',
      buffer,
      encoding: 'utf-8',
      customPdfParse: true
    });

    expect(result.rawText).toBe('custom-service-parsed-text');
  });

  it('should use textin service for pdf when textinAppId is configured', async () => {
    global.systemEnv = {
      customPdfParse: { textinAppId: 'app-id', textinSecretCode: 'secret' }
    } as any;

    const buffer = Buffer.from('pdf content');

    const result = await readFileContentByBuffer({
      teamId,
      tmbId,
      extension: 'pdf',
      buffer,
      encoding: 'utf-8',
      customPdfParse: true
    });

    expect(result.rawText).toBe('textin-parsed-text');
  });

  it('should use doc2x service for pdf when doc2xKey is configured', async () => {
    global.systemEnv = {
      customPdfParse: { doc2xKey: 'doc2x-api-key' }
    } as any;

    const buffer = Buffer.from('pdf content');

    const result = await readFileContentByBuffer({
      teamId,
      tmbId,
      extension: 'pdf',
      buffer,
      encoding: 'utf-8',
      customPdfParse: true
    });

    expect(result.rawText).toBe('doc2x-parsed-text');
  });

  it('should reject when custom URL service returns error', async () => {
    global.systemEnv = {
      customPdfParse: { url: 'http://custom-pdf-service.com/parse' }
    } as any;

    mockAxiosPost.mockResolvedValueOnce({
      data: {
        pages: 0,
        markdown: '',
        error: 'Parse failed'
      }
    });

    const buffer = Buffer.from('pdf content');

    await expect(
      readFileContentByBuffer({
        teamId,
        tmbId,
        extension: 'pdf',
        buffer,
        encoding: 'utf-8',
        customPdfParse: true
      })
    ).rejects.toBe('Parse failed');
  });

  it('should fallback to system parse when custom URL service url is empty', async () => {
    global.systemEnv = {
      customPdfParse: { url: '' }
    } as any;

    const buffer = Buffer.from('pdf content');

    const result = await readFileContentByBuffer({
      teamId,
      tmbId,
      extension: 'pdf',
      buffer,
      encoding: 'utf-8',
      customPdfParse: true
    });

    expect(result.rawText).toBe('parsed-pdf-content');
  });

  it('should process images from parsed document with imageKeyOptions', async () => {
    // The scraped source dropped the inline image markdown; the placeholder uuid
    // must appear in rawText for the S3-upload replacement to be observable.
    mockReadRawContentFromBuffer.mockResolvedValueOnce({
      rawText: 'text with ![](IMAGE_abc123_IMAGE)',
      formatText: 'text with ![](IMAGE_abc123_IMAGE)',
      imageList: [
        {
          uuid: 'IMAGE_abc123_IMAGE',
          base64: 'iVBORw0KGgo=',
          mime: 'image/png'
        }
      ]
    });

    const buffer = Buffer.from('content with images');

    const result = await readFileContentByBuffer({
      teamId,
      tmbId,
      extension: 'md',
      buffer,
      encoding: 'utf-8',
      imageKeyOptions: {
        prefix: 'test/prefix'
      }
    });

    expect(result.rawText).toContain('https://s3.example.com/uploaded-image.png');
    expect(result.rawText).not.toContain('IMAGE_abc123_IMAGE');
  });

  it('should skip image upload when imageKeyOptions is not provided', async () => {
    mockReadRawContentFromBuffer.mockResolvedValueOnce({
      rawText: 'text with ![](IMAGE_abc123_IMAGE)',
      formatText: 'text with ![](IMAGE_abc123_IMAGE)',
      imageList: [
        {
          uuid: 'IMAGE_abc123_IMAGE',
          base64: 'iVBORw0KGgo=',
          mime: 'image/png'
        }
      ]
    });

    const buffer = Buffer.from('content with images');

    const result = await readFileContentByBuffer({
      teamId,
      tmbId,
      extension: 'md',
      buffer,
      encoding: 'utf-8'
    });

    // Without imageKeyOptions, images get empty string replacement
    expect(result.rawText).not.toContain('IMAGE_abc123_IMAGE');
  });
});