docs(i18n): translate final 9 files in introduction directory (#6471)
---
title: Cloudflare Worker Proxy
description: Use Cloudflare Worker as a Proxy
---

[Reference tutorial by "不做了睡觉"](https://gravel-twister-d32.notion.site/FastGPT-API-ba7bb261d5fd4fd9bbb2f0607dacdc9e)

**Workers configuration file**

```js
const TELEGRAPH_URL = 'https://api.openai.com';

addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // Security check: reject requests without the expected auth header
  if (request.headers.get('auth') !== 'auth_code') {
    return new Response('Unauthorized', { status: 403 });
  }

  // Rewrite the host so the request is forwarded to the OpenAI API
  const url = new URL(request.url);
  url.host = TELEGRAPH_URL.replace(/^https?:\/\//, '');

  const modifiedRequest = new Request(url.toString(), {
    headers: request.headers,
    method: request.method,
    body: request.body,
    redirect: 'follow'
  });

  const response = await fetch(modifiedRequest);
  const modifiedResponse = new Response(response.body, response);

  // Add CORS headers
  modifiedResponse.headers.set('Access-Control-Allow-Origin', '*');

  return modifiedResponse;
}
```
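The worker's host rewrite can be sanity-checked in isolation with a standalone Node sketch (the worker hostname below is a placeholder): the path and query string survive, and only the host is swapped.

```javascript
// Standalone check of the worker's host-rewrite step (no Cloudflare runtime needed).
const TELEGRAPH_URL = 'https://api.openai.com';

// Hypothetical incoming worker URL, including a path and query string.
const url = new URL('https://my-worker.example.workers.dev/v1/chat/completions?stream=true');
url.host = TELEGRAPH_URL.replace(/^https?:\/\//, ''); // same rewrite as the worker

console.log(url.toString());
// Path and query are preserved; only the host now points at api.openai.com.
```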
**Update FastGPT environment variables**

> Don't forget to include `v1`!

```bash
OPENAI_BASE_URL=https://xxxxxx/v1
OPENAI_BASE_URL_AUTH=auth_code
```
---
title: HTTP Proxy
description: Use an HTTP Proxy for Routing
---

If you have a proxy tool (like [Clash](https://github.com/Dreamacro/clash) or [sing-box](https://github.com/SagerNet/sing-box)), you can use an HTTP proxy to access OpenAI. Just add these two environment variables:

```bash
AXIOS_PROXY_HOST=
AXIOS_PROXY_PORT=
```

Using Clash as an example, it's recommended to route only `api.openai.com` through the proxy and direct-connect everything else. Example configuration:

```yaml
mixed-port: 7890
allow-lan: false
bind-address: '*'
mode: rule
log-level: warning
dns:
  enable: true
  ipv6: false
  nameserver:
    - 8.8.8.8
    - 8.8.4.4
  cache-size: 400
proxies:
  -
proxy-groups:
  - { name: '♻️ Auto Select', type: url-test, proxies: [HK-V01×1.5], url: 'https://api.openai.com', interval: 3600 }
rules:
  - 'DOMAIN-SUFFIX,api.openai.com,♻️ Auto Select'
  - 'MATCH,DIRECT'
```
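The two rules above route only OpenAI traffic through the proxy group. As a rough illustration (a sketch, not Clash's actual matcher): `DOMAIN-SUFFIX` matches the domain itself and any subdomain, and `MATCH` catches everything else.

```javascript
// Sketch of the routing decision the two rules above produce (illustration only).
function routeFor(host) {
  // 'DOMAIN-SUFFIX,api.openai.com,♻️ Auto Select'
  if (host === 'api.openai.com' || host.endsWith('.api.openai.com')) {
    return '♻️ Auto Select';
  }
  // 'MATCH,DIRECT'
  return 'DIRECT';
}

console.log(routeFor('api.openai.com')); // goes through the proxy group
console.log(routeFor('example.com'));    // direct connection
```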
Then add these two environment variables to FastGPT:

```bash
AXIOS_PROXY_HOST=127.0.0.1
AXIOS_PROXY_PORT=7890
```
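Conceptually, these two variables end up as an HTTP-client proxy setting of the `{ host, port }` shape. The helper below is hypothetical (an assumption about the wiring, not FastGPT's actual code):

```javascript
// Hypothetical helper: how the two env vars could map onto an axios-style
// `proxy` option ({ host, port }). FastGPT's real wiring may differ.
function proxyFromEnv(env) {
  const host = env.AXIOS_PROXY_HOST;
  const port = env.AXIOS_PROXY_PORT;
  if (!host || !port) return undefined; // no proxy configured
  return { host, port: Number(port) };
}

console.log(proxyFromEnv({ AXIOS_PROXY_HOST: '127.0.0.1', AXIOS_PROXY_PORT: '7890' }));
```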
{
  "title": "Proxy Solutions",
  "description": "FastGPT private deployment proxy solutions",
  "pages": [
    "nginx",
    "http_proxy",
    "cloudflare"
  ]
}
---
title: Nginx Proxy
description: Deploy Nginx on Sealos as a Proxy
---

## Log in to Sealos

[Sealos](https://cloud.sealos.io?uid=fnWRt09fZP)

## Create an Application

Open "App Launchpad" and click "New Application":




### Fill in Basic Configuration

Make sure to enable external access and copy the provided external access address.



### Add Configuration File

1. Copy the configuration below. Replace the content after `server_name` with the external access address from step 2.

```nginx
user nginx;
worker_processes auto;
worker_rlimit_nofile 51200;

events {
    worker_connections 1024;
}

http {
    resolver 8.8.8.8;
    proxy_ssl_server_name on;

    access_log off;
    server_names_hash_bucket_size 512;
    client_header_buffer_size 64k;
    large_client_header_buffers 4 64k;
    client_max_body_size 50M;

    proxy_connect_timeout 240s;
    proxy_read_timeout 240s;
    proxy_buffer_size 128k;
    proxy_buffers 4 256k;

    server {
        listen 80;
        server_name tgohwtdlrmer.cloud.sealos.io; # Replace with the Sealos external address

        location ~ /openai/(.*) {
            proxy_pass https://api.openai.com/$1$is_args$args;
            proxy_set_header Host api.openai.com;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # For streaming responses
            proxy_set_header Connection '';
            proxy_http_version 1.1;
            chunked_transfer_encoding off;
            proxy_buffering off;
            proxy_cache off;
            # For regular responses
            proxy_buffer_size 128k;
            proxy_buffers 4 256k;
            proxy_busy_buffers_size 256k;
        }
    }
}
```
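The key part is the `location ~ /openai/(.*)` block: the `/openai/` prefix is stripped and the remainder, plus any query string (via `$is_args$args`), is forwarded to `api.openai.com`. A small sketch of that mapping (plain JavaScript, not nginx itself):

```javascript
// Mimics the rewrite done by `location ~ /openai/(.*)` together with
// `proxy_pass https://api.openai.com/$1$is_args$args;` (illustration only).
function upstreamUrl(pathWithQuery) {
  const [path, query = ''] = pathWithQuery.split('?');
  const m = path.match(/^\/openai\/(.*)$/);
  if (m === null) return null; // request does not hit this location block
  const isArgs = query ? '?' : ''; // nginx's $is_args
  return `https://api.openai.com/${m[1]}${isArgs}${query}`;
}

console.log(upstreamUrl('/openai/v1/chat/completions'));
console.log(upstreamUrl('/openai/api'));
```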
2. Open Advanced Configuration.
3. Click "Add Config File".
4. File name: `/etc/nginx/nginx.conf`.
5. File value: the code you just copied.
6. Click Confirm.



### Deploy the Application

After filling everything in, click "Deploy" in the upper right corner to complete deployment.

## Update FastGPT Environment Variables

1. Go to the deployed app's details and copy the external address.

> Note: This is an API address; opening it directly in a browser won't work. To verify, visit: `*.cloud.sealos.io/openai/api`. If you see `Invalid URL (GET /api)`, it's working correctly.



2. Update the environment variable (this is FastGPT's environment variable, not Sealos'):

```bash
OPENAI_BASE_URL=https://tgohwtdlrmer.cloud.sealos.io/openai/v1
```
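To see why the `/openai/v1` suffix matters: OpenAI-style clients append endpoint paths such as `/chat/completions` to the base URL, so the resulting request path must start with `/openai/` to land in the nginx location block above. A quick sketch:

```javascript
// Sketch: the request path an OpenAI-style client builds from this base URL.
const OPENAI_BASE_URL = 'https://tgohwtdlrmer.cloud.sealos.io/openai/v1';
const requestUrl = `${OPENAI_BASE_URL}/chat/completions`;

const pathSeenByNginx = new URL(requestUrl).pathname;
console.log(pathSeenByNginx); // starts with /openai/, so the proxy rule applies
```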
**Done!**