---
title: Object Storage Configuration
description: How to configure and connect to various object storage providers via environment variables, and common configuration issues
---

import { Alert } from '@/components/docs/Alert';
import FastGPTLink from '@/components/docs/linkFastGPT';

## Object Storage Configuration

This guide covers environment variable configuration for the object storage providers supported by FastGPT, including self-hosted MinIO, AWS S3, Alibaba Cloud OSS, and Tencent Cloud COS.

### Common Required Environment Variables

> - Temporary credential authentication (e.g., STS) is not supported; you are responsible for securing the service yourself.
> - Reusing one bucket for both roles is not officially supported. If you set the private and public bucket names to the same value, make sure the bucket policy is at least **public read, private write**.

- `STORAGE_VENDOR` Enum value. Options: `minio`, `aws-s3`, `oss`, `cos`.
- `STORAGE_REGION` Region where the object storage service is located, e.g., `us-east-1`. Refer to your provider's region list. For self-hosted MinIO, any value works.
- `STORAGE_ACCESS_KEY_ID` Access Key ID of the service credentials.
- `STORAGE_SECRET_ACCESS_KEY` Secret Access Key of the service credentials.
- `STORAGE_PUBLIC_BUCKET` Bucket name for FastGPT public resources.
- `STORAGE_PRIVATE_BUCKET` Bucket name for FastGPT private resources.
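As a quick sanity check, the required variables above can be validated before startup. The sketch below is illustrative TypeScript, not part of FastGPT; the function name `checkStorageEnv` is hypothetical.

```typescript
// Illustrative sketch (not FastGPT's actual code): fail fast on a missing
// or invalid common storage configuration.
const STORAGE_VENDORS = ['minio', 'aws-s3', 'oss', 'cos'];

const REQUIRED_KEYS = [
  'STORAGE_VENDOR',
  'STORAGE_REGION',
  'STORAGE_ACCESS_KEY_ID',
  'STORAGE_SECRET_ACCESS_KEY',
  'STORAGE_PUBLIC_BUCKET',
  'STORAGE_PRIVATE_BUCKET',
];

function checkStorageEnv(env: Record<string, string | undefined>): string[] {
  const errors: string[] = [];
  // Every common variable must be set and non-empty.
  for (const key of REQUIRED_KEYS) {
    if (!env[key]) errors.push(`missing ${key}`);
  }
  // The vendor must be one of the supported enum values.
  if (env.STORAGE_VENDOR && !STORAGE_VENDORS.includes(env.STORAGE_VENDOR)) {
    errors.push(`unknown STORAGE_VENDOR: ${env.STORAGE_VENDOR}`);
  }
  return errors;
}
```

Running this against `process.env` at boot surfaces configuration mistakes as a readable list instead of a failed upload later.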
### Transfer Behavior

- Uploads always go through the FastGPT backend proxy.
- Downloads support both `proxy` and `presigned` modes.
- The default download mode is inferred from `STORAGE_EXTERNAL_ENDPOINT`:
  - not configured: defaults to `proxy`
  - configured: defaults to `presigned`
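The inference rule above can be sketched in a few lines of illustrative TypeScript (not FastGPT's actual code; `resolveDownloadMode` is a hypothetical name):

```typescript
// Illustrative sketch: the download mode falls back to `proxy`
// unless an external endpoint is configured.
type DownloadMode = 'proxy' | 'presigned';

function resolveDownloadMode(externalEndpoint: string | undefined): DownloadMode {
  // An empty string counts as "not configured".
  return externalEndpoint ? 'presigned' : 'proxy';
}
```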
### Self-Hosted MinIO and AWS S3

> MinIO has strong AWS S3 protocol support, so MinIO and AWS S3 configurations are nearly identical; the differences come from provider-specific or self-hosted requirements.
> In theory, any object storage with S3 protocol support comparable to MinIO's will work, such as SeaweedFS, RustFS, etc.

- `STORAGE_S3_ENDPOINT` Internal connection address. Can be a container name on the Docker network, e.g., `http://fastgpt-minio:9000`.
- `STORAGE_EXTERNAL_ENDPOINT` An address from which both the **server** and the **client** can reach the bucket. Use a fixed host IP or domain name; don't use `127.0.0.1` or `localhost`, since containers can't reach the host's loopback address. Once configured, the default download mode automatically becomes `presigned`.
- `STORAGE_S3_FORCE_PATH_STYLE` [Optional] Selects path-style (`true`) or virtual-hosted-style (`false`) request routing. If the vendor is `minio`, this is fixed to `true`.
- `STORAGE_S3_MAX_RETRIES` [Optional] Maximum request retry attempts. Default: 3

**Complete Example**

> If using Sealos object storage, set `STORAGE_VENDOR` to `aws-s3`
```dotenv
STORAGE_VENDOR=minio
STORAGE_REGION=us-east-1
STORAGE_ACCESS_KEY_ID=your_access_key
STORAGE_SECRET_ACCESS_KEY=your_secret_key
STORAGE_PUBLIC_BUCKET=fastgpt-public
STORAGE_PRIVATE_BUCKET=fastgpt-private
# Use a host IP or domain reachable by both server and client,
# not 127.0.0.1 / localhost
STORAGE_EXTERNAL_ENDPOINT=http://your-host-ip:9000
STORAGE_S3_ENDPOINT=http://127.0.0.1:9000
STORAGE_S3_FORCE_PATH_STYLE=true
STORAGE_S3_MAX_RETRIES=3
```
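To show how the variables above map onto an S3 client, here is a hedged TypeScript sketch. It is not FastGPT's actual code; `buildS3Options` is a hypothetical name, and the option keys (`endpoint`, `region`, `credentials`, `forcePathStyle`, `maxAttempts`) follow the AWS SDK for JavaScript v3 `S3Client` configuration, where `maxAttempts` counts the initial request plus retries.

```typescript
// Illustrative sketch: derive S3 client options from the environment
// variables documented above.
type S3Options = {
  endpoint: string;
  region: string;
  credentials: { accessKeyId: string; secretAccessKey: string };
  forcePathStyle: boolean;
  maxAttempts: number;
};

function buildS3Options(env: Record<string, string | undefined>): S3Options {
  return {
    endpoint: env.STORAGE_S3_ENDPOINT ?? 'http://127.0.0.1:9000',
    region: env.STORAGE_REGION ?? 'us-east-1',
    credentials: {
      accessKeyId: env.STORAGE_ACCESS_KEY_ID ?? '',
      secretAccessKey: env.STORAGE_SECRET_ACCESS_KEY ?? '',
    },
    // MinIO requires path-style routing; otherwise honor the optional flag.
    forcePathStyle:
      env.STORAGE_VENDOR === 'minio'
        ? true
        : env.STORAGE_S3_FORCE_PATH_STYLE === 'true',
    // AWS SDK v3 maxAttempts = retries + 1; default is 3 retries.
    maxAttempts: Number(env.STORAGE_S3_MAX_RETRIES ?? '3') + 1,
  };
}
```

The resulting object could be passed to `new S3Client(...)` from `@aws-sdk/client-s3`; the sketch only builds the options so it stays runnable without the SDK installed.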
### Alibaba Cloud OSS

> - [CORS Configuration](https://help.aliyun.com/zh/oss/user-guide/configure-cross-origin-resource-sharing/?spm=5176.8466032.console-base_help.dexternal.1bcd1450Wau6J6#b58400ec36rqf)

- `STORAGE_OSS_ENDPOINT` Alibaba Cloud OSS hostname. Usually `{region}.aliyuncs.com`, e.g., `oss-cn-hangzhou.aliyuncs.com`. If using a custom domain, enter it here, e.g., `your-domain.com`.
- `STORAGE_OSS_CNAME` Whether a custom domain (CNAME) is used. Set to `true` when `STORAGE_OSS_ENDPOINT` is a custom domain.
- `STORAGE_OSS_SECURE` Whether TLS (HTTPS) is enabled. Disable it if your domain doesn't have a certificate.
- `STORAGE_OSS_INTERNAL` [Optional] Whether to use internal (VPC) network access. Enable it if your service also runs on Alibaba Cloud to save bandwidth costs. Default: disabled

**Complete Example**
```dotenv
STORAGE_VENDOR=oss
STORAGE_REGION=oss-cn-hangzhou
STORAGE_ACCESS_KEY_ID=your_access_key
STORAGE_SECRET_ACCESS_KEY=your_secret_key
STORAGE_PUBLIC_BUCKET=fastgpt-public
STORAGE_PRIVATE_BUCKET=fastgpt-private
STORAGE_OSS_ENDPOINT=oss-cn-hangzhou.aliyuncs.com
STORAGE_OSS_CNAME=false
STORAGE_OSS_SECURE=false
STORAGE_OSS_INTERNAL=false
```
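To make the interaction of `STORAGE_OSS_CNAME`, `STORAGE_OSS_SECURE`, and `STORAGE_OSS_INTERNAL` concrete, here is an illustrative TypeScript sketch (not FastGPT's actual code; `buildOssEndpoint` is a hypothetical name). The `-internal` host suffix follows Alibaba Cloud's documented VPC endpoint naming, e.g. `oss-cn-hangzhou-internal.aliyuncs.com`.

```typescript
// Illustrative sketch: derive the effective OSS endpoint URL from the
// variables documented above.
function buildOssEndpoint(opts: {
  region: string;      // STORAGE_REGION, e.g. 'oss-cn-hangzhou'
  cname: boolean;      // STORAGE_OSS_CNAME
  endpoint?: string;   // STORAGE_OSS_ENDPOINT when cname is true
  secure: boolean;     // STORAGE_OSS_SECURE
  internal: boolean;   // STORAGE_OSS_INTERNAL
}): string {
  const scheme = opts.secure ? 'https' : 'http';
  if (opts.cname && opts.endpoint) {
    // Custom domains are used as-is; no internal variant exists for them.
    return `${scheme}://${opts.endpoint}`;
  }
  const host = opts.internal
    ? `${opts.region}-internal.aliyuncs.com`
    : `${opts.region}.aliyuncs.com`;
  return `${scheme}://${host}`;
}
```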
### Tencent Cloud COS

> - [CORS Configuration](https://cloud.tencent.com/document/product/436/13318)

- `STORAGE_COS_PROTOCOL` Options: `https:`, `http:`. Don't forget the trailing `:`. If your custom domain doesn't have a certificate, don't use `https:`.
- `STORAGE_COS_USE_ACCELERATE` [Optional] Enable the global acceleration domain. Default: false. If true, global acceleration must be enabled on the bucket.
- `STORAGE_COS_CNAME_DOMAIN` [Optional] Custom domain, e.g., `your-domain.com`
- `STORAGE_COS_PROXY` [Optional] Proxy server, e.g., `http://localhost:7897`
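Because the trailing `:` in `STORAGE_COS_PROTOCOL` is an easy mistake, a small guard like the following can catch it early. This is an illustrative TypeScript sketch, not FastGPT's actual code; `normalizeCosProtocol` and its https default are assumptions. The colon-suffixed form matches Node's `URL.protocol` convention.

```typescript
// Illustrative sketch: accept 'http'/'https' with or without the trailing
// colon, reject anything else.
function normalizeCosProtocol(value: string | undefined): 'http:' | 'https:' {
  // Assumed default: https when unset.
  const v = (value ?? 'https:').replace(/:$/, '');
  if (v !== 'http' && v !== 'https') {
    throw new Error(`invalid STORAGE_COS_PROTOCOL: ${value}`);
  }
  return v === 'http' ? 'http:' : 'https:';
}
```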
**Complete Example**
```dotenv
STORAGE_VENDOR=cos
STORAGE_REGION=ap-shanghai
STORAGE_ACCESS_KEY_ID=your_access_key
STORAGE_SECRET_ACCESS_KEY=your_secret_key
STORAGE_PUBLIC_BUCKET=fastgpt-public
STORAGE_PRIVATE_BUCKET=fastgpt-private
STORAGE_COS_PROTOCOL=http:
STORAGE_COS_USE_ACCELERATE=false
STORAGE_COS_CNAME_DOMAIN=
STORAGE_COS_PROXY=
```