13 Commits
1.2.1 ... 1.4.0

Author SHA1 Message Date
longjuan
022ecea94f Fix URL space error (#41)
Fixes https://github.com/halo-sigs/plugin-s3/issues/40
![image](https://github.com/halo-sigs/plugin-s3/assets/28662535/03bc4ed8-c539-451f-8a88-99084240038a)

```release-note
None
```
2023-06-01 08:23:13 +00:00
John Niang
b3bdd02e08 Fix incorrect setting on TTL of share URL (#39)
The share URL mechanism was introduced in https://github.com/halo-sigs/plugin-s3/pull/35, but I incorrectly set the TTL of the URL to 5 minutes.

```release-note
None
```
2023-06-01 08:13:16 +00:00
longjuan
5a95b4ced1 Adapt permalinks to Path Style access (#38)
Fixes https://github.com/halo-sigs/plugin-s3/issues/37

```release-note
Permalinks are now assembled according to the configured access style
```

With the Path Style policy configured:
Before the change:
![image](https://github.com/halo-sigs/plugin-s3/assets/28662535/631b33f8-e534-445b-bf1c-3edbc9a543bc)


After the change:
![image](https://github.com/halo-sigs/plugin-s3/assets/28662535/ca6edbd4-8455-4246-b49b-f12afc3ea020)
2023-05-12 16:52:27 +00:00
John Niang
88490bb80f Support to get shared URL and permalink of attachment in handler (#35)
On the Halo side, PR https://github.com/halo-dev/halo/pull/3740 has already added two new methods (`getSharedURL` and `getPermalink`) to AttachmentHandler. Now it's time to implement them so that users can use them correctly and easily.

This PR mainly implements the [new AttachmentHandler](11a5807682/api/src/main/java/run/halo/app/core/extension/attachment/endpoint/AttachmentHandler.java). At the same time, I also refactored the build script for a better development experience.

Please note that these changes should not affect compatibility with Halo 2.0.0, but you can still test against Halo 2.0.0 manually.
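For context, generating such a time-limited shared URL with the AWS SDK v2 presigner looks roughly like the sketch below. The bucket, key, region, endpoint, and credential values are placeholders; the plugin's actual implementation is in `S3OsAttachmentHandler#getSharedURL` (see the diff further down).

```java
import java.net.URI;
import java.net.URL;
import java.time.Duration;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;

public class SharedUrlExample {
    public static void main(String[] args) {
        // Placeholder region, endpoint, and credentials — replace with real values.
        try (S3Presigner presigner = S3Presigner.builder()
                .region(Region.of("cn-east-1"))
                .endpointOverride(URI.create("https://s3.example.com"))
                .credentialsProvider(StaticCredentialsProvider.create(
                        AwsBasicCredentials.create("access-key", "access-secret")))
                .build()) {
            GetObjectRequest getObject = GetObjectRequest.builder()
                    .bucket("my-bucket")
                    .key("halo/attachment.png")
                    .build();
            GetObjectPresignRequest presignRequest = GetObjectPresignRequest.builder()
                    .signatureDuration(Duration.ofHours(1)) // TTL of the shared URL
                    .getObjectRequest(getObject)
                    .build();
            // The presigned URL embeds the signature and expiry, so no further auth is needed.
            URL sharedUrl = presigner.presignGetObject(presignRequest).url();
            System.out.println(sharedUrl);
        }
    }
}
```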

/kind feature

```release-note
Support getting shared URLs and permalinks of attachments
```
2023-04-21 12:33:40 +00:00
John Niang
5e9b9f803b Use S3Client instead of S3AsyncClient to avoid waiting two seconds for closing (#30)
Fixes https://github.com/halo-sigs/plugin-s3/issues/23

```release-note
Fix slow file uploads
```
2023-04-06 08:06:15 +00:00
longjuan
c635ebede8 perf: auto rename attachment if it exists (#22)
Fixes https://github.com/halo-dev/halo/issues/3337
Instead of updating dependencies, `FileNameUtils` was copied in directly.
With an existing image.png, paste two more screenshots at the same time; both are expected to be uploaded and automatically renamed.
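Roughly how the copied `FileNameUtils` renames on collision — the `RenameExample` wrapper and the concrete suffixes below are illustrative, since the suffix is random:

```java
import run.halo.s3os.FileNameUtils;

public class RenameExample {
    public static void main(String[] args) {
        // With image.png already present, each colliding upload gets a random 4-character suffix
        // inserted before the extension.
        System.out.println(FileNameUtils.randomFileName("image.png", 4)); // e.g. image-abcd.png
        System.out.println(FileNameUtils.randomFileName("image.png", 4)); // e.g. image-wxyz.png
    }
}
```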

![image](https://user-images.githubusercontent.com/28662535/220059741-da25a490-6f6a-4172-a393-aa3f84ab6b38.png)
![image](https://user-images.githubusercontent.com/28662535/220059786-24cda2bb-6faa-4377-8eb8-a70920916f3d.png)

```release-note
Automatically rename a file when one with the same name already exists
```
2023-02-25 02:38:14 +00:00
miaodi
459cc1cf94 Add Oracle Cloud configuration guide to README (#20)
Adds the Oracle Cloud configuration; uploads were verified to work.
Official documentation: https://docs.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm

Both `Path Style` and `Virtual Hosted Style` can be configured and were tested successfully.
`Virtual Hosted Style` is recommended:

![image](https://user-images.githubusercontent.com/19516717/216295351-5146f5ab-0cf6-43a1-bc6e-ad261c55f198.png)

Endpoint: compat.objectstorage.{region}.oraclecloud.com
Replace `{region}` with the region value shown in the screenshot above.

Leave the bound domain field empty.

![image](https://user-images.githubusercontent.com/19516717/216307619-b54b5829-8341-469d-86b1-dad7e1e65260.png)
For `Access Key` and `Access Secret`, generate a customer secret key under your user settings.

```release-note
None
```
2023-02-02 14:20:10 +00:00
SanqianQVQ
780258ffc1 Update README.md (#18)
Added Cloudflare info and used bright red for readability.

```release-note
None
```
2023-01-31 10:58:09 +00:00
longjuan
c9f13d4b5f chore: bump version and correct license (#15)
perf: Use async client and multipart upload to avoid out of memory by @longjuan in https://github.com/halo-sigs/plugin-s3/pull/7
feat: add access style options to support minio binding domain names by @longjuan in https://github.com/halo-sigs/plugin-s3/pull/13
feat: check the file already exists before uploading by @longjuan in https://github.com/halo-sigs/plugin-s3/pull/11
```release-note
None
```
2023-01-31 02:22:09 +00:00
longjuan
72af0fcdac chore: add configuration guide in README (#14)
The new access style option made the configuration harder to understand, so a configuration guide has been added.
```release-note
None
```
2023-01-30 02:04:11 +00:00
longjuan
21b752dd25 feat: check the file already exists before uploading (#11)
Fixes https://github.com/halo-dev/halo/issues/2945
```release-note
Check for an existing file with the same name when uploading
```

A `ConcurrentHashMap` is used here to prevent two files with the same name that are uploaded **at the same time** from overwriting each other; with the local storage policy this is guaranteed by the operating system.
I'm not sure whether this is a good approach.

`S3AsyncClient` has no method like `doesObjectExist`; the official documentation also suggests calling `headObject` and catching the exception to determine whether a file exists. See https://github.com/aws/aws-sdk-java-v2/blob/master/docs/LaunchChangelog.md and search for `doesObjectExist`.
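A minimal sketch of that head-and-catch pattern with the synchronous v2 client — the helper name `existsOnS3` is illustrative; the plugin's actual check in `checkFileExistsAndRename` wraps the same `headObject` call reactively:

```java
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.NoSuchKeyException;

final class ObjectExistence {
    private ObjectExistence() {
    }

    /**
     * Returns true if the object exists. The v2 SDK has no doesObjectExist,
     * so we call headObject and treat NoSuchKeyException as "not found".
     */
    static boolean existsOnS3(S3Client client, String bucket, String key) {
        try {
            client.headObject(HeadObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .build());
            return true;
        } catch (NoSuchKeyException e) {
            return false;
        }
    }
}
```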
2023-01-28 06:40:10 +00:00
longjuan
b5c2c50654 feat: add access style options to support minio binding domain names (#13)
Fixes https://github.com/halo-sigs/plugin-s3/issues/12

![image](https://user-images.githubusercontent.com/28662535/213078807-b4f7c877-0e83-4a0f-a87b-871c7a3c73dc.png)
![image](https://user-images.githubusercontent.com/28662535/213078827-b0cd7e93-04af-4f3e-988b-e03db3beb85a.png)

```release-note
Add access style options to support MinIO with bound domain names
```

⚠️ Before releasing a new version, add a note about compatible access styles to the README.
![image](https://user-images.githubusercontent.com/28662535/213079553-9781c489-b969-4e8f-849e-01f2168f2569.png)
2023-01-28 03:40:09 +00:00
longjuan
1158ea7ae8 perf: Use async client and multipart upload to avoid out of memory (#7)
Fixes https://github.com/halo-sigs/plugin-s3/issues/6
```release-note
Use async client and multipart upload to avoid out of memory
```
2023-01-09 09:48:39 +00:00
10 changed files with 490 additions and 152 deletions


@@ -2,6 +2,68 @@
Provides an S3-protocol object storage policy for Halo 2.0, supporting Alibaba Cloud, Tencent Cloud, Qiniu Cloud, and other S3-compatible object storage providers.
## Usage
1. Download the latest JAR file from [Releases](https://github.com/halo-sigs/plugin-s3/releases).
2. Upload the JAR file on the plugin management page of the Halo console to install it.
3. Open attachment management in the console.
4. Click the storage policy button in the upper-right corner; in the upper-right corner of the storage policy dialog you can create a new S3 Object Storage policy.
5. Once created, the new S3 Object Storage policy can be selected when uploading.
## Configuration Guide
### Endpoint Access Style
Choose according to the compatible access styles listed in the table below. If your provider is not listed, consult the provider's S3 compatibility documentation or experiment yourself.
> Style explanation:<br/>
> With the Endpoint set to `s3.example.com`,<br/>
> Path Style: the SDK accesses `s3.example.com/<bucket-name>/<object-key>`<br/>
> Virtual Hosted Style: the SDK accesses `<bucket-name>.s3.example.com/<object-key>`
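For illustration, the same distinction expressed with the AWS SDK v2 that this plugin uses — a minimal sketch with placeholder endpoint, region, and bucket values; end users only pick the style in the plugin settings, and the plugin applies it in `buildS3Client` via `S3Configuration`:

```java
import java.net.URI;

import software.amazon.awssdk.auth.credentials.AnonymousCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Configuration;

public class AccessStyleExample {
    public static void main(String[] args) {
        // Path Style: requests go to s3.example.com/<bucket-name>/<object-key>
        S3Client pathStyle = S3Client.builder()
                .region(Region.of("cn-east-1")) // placeholder region
                .endpointOverride(URI.create("https://s3.example.com"))
                .credentialsProvider(AnonymousCredentialsProvider.create())
                .serviceConfiguration(S3Configuration.builder()
                        .pathStyleAccessEnabled(true)
                        .build())
                .build();

        // Virtual Hosted Style: requests go to <bucket-name>.s3.example.com/<object-key>
        S3Client virtualHosted = S3Client.builder()
                .region(Region.of("cn-east-1"))
                .endpointOverride(URI.create("https://s3.example.com"))
                .credentialsProvider(AnonymousCredentialsProvider.create())
                .serviceConfiguration(S3Configuration.builder()
                        .pathStyleAccessEnabled(false)
                        .build())
                .build();

        pathStyle.close();
        virtualHosted.close();
    }
}
```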
### Endpoint
Always fill in the Endpoint **without** the bucket-name; the SDK handles the access style automatically.
To find the S3 Endpoint, search the provider's documentation for keywords such as s3, Endpoint, or access domain; it is usually the same as the provider's own endpoint.
> For example, Baidu Cloud provides both `s3.bj.bcebos.com` and `<bucket-name>.s3.bj.bcebos.com`; fill in `s3.bj.bcebos.com`.
### Access Key & Access Secret
Identical to the Access Key and Access Secret of the provider's own API; see the provider's documentation for details.
### Bucket Name
Must match the bucket name shown in the provider's console.
### Region
Usually this can be left empty.
> If the other settings are confirmed correct but access still fails, look up the English Region name in the provider's documentation and fill it in, e.g. `cn-east-1`.
>
> Cloudflare requires `auto`, in all lowercase letters.
## Compatibility of Selected Object Storage Providers
|Provider|Documentation|Compatible Access Styles|Compatibility|
| ----- | ---- | ----- | ----- |
|Alibaba Cloud|https://help.aliyun.com/document_detail/410748.html|Virtual Hosted Style|✅|
|Tencent Cloud|[https://cloud.tencent.com/document/product/436/41284](https://cloud.tencent.com/document/product/436/41284)|Virtual Hosted Style / <br>Path Style|✅|
|Qiniu Cloud|https://developer.qiniu.com/kodo/4088/s3-access-domainname|Virtual Hosted Style / <br>Path Style|✅|
|Baidu Cloud|https://cloud.baidu.com/doc/BOS/s/Fjwvyq9xo|Virtual Hosted Style / <br>Path Style|✅|
|JD Cloud| https://docs.jdcloud.com/cn/object-storage-service/api/regions-and-endpoints |Virtual Hosted Style|✅|
|Kingsoft Cloud|https://docs.ksyun.com/documents/6761|Virtual Hosted Style|✅|
|QingCloud|https://docsv3.qingcloud.com/storage/object-storage/s3/intro/|Virtual Hosted Style / <br>Path Style|✅|
|NetEase Shufan|[https://sf.163.com/help/documents/89796157866430464](https://sf.163.com/help/documents/89796157866430464)|Virtual Hosted Style|✅|
|Cloudflare|Cloudflare S3-compatible API<br>[https://developers.cloudflare.com/r2/data-access/s3-api/](https://developers.cloudflare.com/r2/data-access/s3-api/)|Virtual Hosted Style / <br>Path Style|✅|
| Oracle Cloud |[https://docs.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm](https://docs.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm)|Virtual Hosted Style / <br>Path Style|✅|
|Self-hosted MinIO|\-|Path Style|✅|
|Huawei Cloud|Docs do not state S3 compatibility and support does not guarantee it, but it works in actual testing|Virtual Hosted Style|❓|
|Ucloud|Only supports 8 MB multipart chunks, which this plugin does not support yet<br>[https://docs.ucloud.cn/ufile/s3/s3\_introduction](https://docs.ucloud.cn/ufile/s3/s3_introduction)|\-|❌|
|Upyun (又拍云)|Does not yet support the S3 protocol|\-|❌|
## Development Environment
```bash
@@ -31,11 +93,3 @@ plugin:
```
After the build completes, the plugin JAR is available in the `build/libs` directory; upload it on the plugin management page of the Halo console.
## Usage
1. Download the latest JAR file from [Releases](https://github.com/halo-sigs/plugin-s3/releases).
2. Upload the JAR file on the plugin management page of the Halo console to install it.
3. Open attachment management in the console.
4. Click the storage policy button in the upper-right corner; in the upper-right corner of the storage policy dialog you can create a new S3 Object Storage policy.
5. Once created, the new S3 Object Storage policy can be selected when uploading.


@@ -1,5 +1,6 @@
plugins {
id "io.github.guqing.plugin-development" version "0.0.6-SNAPSHOT"
id "io.github.guqing.plugin-development" version "0.0.7-SNAPSHOT"
id "io.freefair.lombok" version "8.0.0-rc2"
id 'java'
}
@@ -8,7 +9,7 @@ sourceCompatibility = JavaVersion.VERSION_17
repositories {
maven { url 'https://s01.oss.sonatype.org/content/repositories/releases' }
maven { url 'https://repo.spring.io/milestone' }
maven { url 'https://s01.oss.sonatype.org/content/repositories/snapshots/' }
mavenCentral()
}
@@ -23,24 +24,17 @@ jar {
}
dependencies {
compileOnly platform("run.halo.dependencies:halo-dependencies:1.0.0")
implementation platform('run.halo.tools.platform:plugin:2.5.0-SNAPSHOT')
compileOnly 'run.halo.app:api'
compileOnly files("lib/halo-2.0.0-SNAPSHOT-plain.jar")
implementation platform('com.amazonaws:aws-java-sdk-bom:1.12.360')
implementation 'com.amazonaws:aws-java-sdk-s3'
implementation platform('software.amazon.awssdk:bom:2.19.8')
implementation 'software.amazon.awssdk:s3'
implementation "javax.xml.bind:jaxb-api:2.3.1"
implementation "javax.activation:activation:1.1.1"
implementation "org.glassfish.jaxb:jaxb-runtime:2.3.3"
compileOnly 'org.projectlombok:lombok'
annotationProcessor 'org.projectlombok:lombok:1.18.22'
testImplementation platform("run.halo.dependencies:halo-dependencies:1.0.0")
testImplementation files("lib/halo-2.0.0-SNAPSHOT-plain.jar")
testImplementation 'run.halo.app:api'
testImplementation 'org.springframework.boot:spring-boot-starter-test'
testImplementation 'org.junit.jupiter:junit-jupiter-api:5.9.0'
testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.9.0'
}
test {


@@ -1 +1 @@
version=1.2.1-SNAPSHOT
version=1.4.0-SNAPSHOT


@@ -1,5 +1,5 @@
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-7.4-bin.zip
distributionUrl=https\://services.gradle.org/distributions/gradle-8.0.2-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists



@@ -0,0 +1,44 @@
package run.halo.s3os;
import com.google.common.io.Files;
import org.apache.commons.lang3.RandomStringUtils;
import org.apache.commons.lang3.StringUtils;
public final class FileNameUtils {
private FileNameUtils() {
}
public static String removeFileExtension(String filename, boolean removeAllExtensions) {
if (filename == null || filename.isEmpty()) {
return filename;
}
var extPattern = "(?<!^)[.]" + (removeAllExtensions ? ".*" : "[^.]*$");
return filename.replaceAll(extPattern, "");
}
/**
* Append random string after file name.
* <pre>
* Case 1: halo.run -> halo-xyz.run
* Case 2: .run -> xyz.run
* Case 3: halo -> halo-xyz
* </pre>
*
* @param filename is name of file.
* @param length is for generating random string with specific length.
* @return File name with random string.
*/
public static String randomFileName(String filename, int length) {
var nameWithoutExt = Files.getNameWithoutExtension(filename);
var ext = Files.getFileExtension(filename);
var random = RandomStringUtils.randomAlphabetic(length).toLowerCase();
if (StringUtils.isBlank(nameWithoutExt)) {
return random + "." + ext;
}
if (StringUtils.isBlank(ext)) {
return nameWithoutExt + "-" + random;
}
return nameWithoutExt + "-" + random + "." + ext;
}
}


@@ -1,25 +1,30 @@
package run.halo.s3os;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.PutObjectResult;
import java.net.URI;
import java.net.URISyntaxException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.nio.file.FileAlreadyExistsException;
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.lang3.StringUtils;
import org.pf4j.Extension;
import org.springframework.core.io.buffer.DataBufferUtils;
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.http.MediaType;
import org.springframework.http.MediaTypeFactory;
import org.springframework.lang.Nullable;
import org.springframework.web.server.ServerErrorException;
import org.springframework.web.server.ServerWebInputException;
import org.springframework.web.util.UriUtils;
import reactor.core.Exceptions;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import reactor.util.retry.Retry;
import run.halo.app.core.extension.attachment.Attachment;
import run.halo.app.core.extension.attachment.Attachment.AttachmentSpec;
import run.halo.app.core.extension.attachment.Constant;
@@ -28,76 +33,131 @@ import run.halo.app.core.extension.attachment.endpoint.AttachmentHandler;
import run.halo.app.extension.ConfigMap;
import run.halo.app.extension.Metadata;
import run.halo.app.infra.utils.JsonUtils;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.UUID;
import java.util.function.Supplier;
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.awscore.presigner.SdkPresigner;
import software.amazon.awssdk.core.SdkResponse;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.http.SdkHttpResponse;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3Configuration;
import software.amazon.awssdk.services.s3.model.CompleteMultipartUploadRequest;
import software.amazon.awssdk.services.s3.model.CompletedMultipartUpload;
import software.amazon.awssdk.services.s3.model.CompletedPart;
import software.amazon.awssdk.services.s3.model.CreateMultipartUploadRequest;
import software.amazon.awssdk.services.s3.model.DeleteObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
import software.amazon.awssdk.services.s3.model.NoSuchKeyException;
import software.amazon.awssdk.services.s3.model.UploadPartRequest;
import software.amazon.awssdk.services.s3.presigner.S3Presigner;
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
import software.amazon.awssdk.utils.SdkAutoCloseable;
@Slf4j
@Extension
public class S3OsAttachmentHandler implements AttachmentHandler {
private static final String OBJECT_KEY = "s3os.plugin.halo.run/object-key";
private static final int MULTIPART_MIN_PART_SIZE = 5 * 1024 * 1024;
private final Map<String, Object> uploadingFile = new ConcurrentHashMap<>();
@Override
public Mono<Attachment> upload(UploadContext uploadContext) {
return Mono.just(uploadContext).filter(context -> this.shouldHandle(context.policy()))
.flatMap(context -> {
final var properties = getProperties(context.configMap());
return upload(context, properties).map(
objectDetail -> this.buildAttachment(context, properties, objectDetail));
return upload(context, properties)
.subscribeOn(Schedulers.boundedElastic())
.map(objectDetail -> this.buildAttachment(properties, objectDetail));
});
}
@Override
public Mono<Attachment> delete(DeleteContext deleteContext) {
return Mono.just(deleteContext).filter(context -> this.shouldHandle(context.policy()))
.doOnNext(context -> {
var annotations = context.attachment().getMetadata().getAnnotations();
if (annotations == null || !annotations.containsKey(OBJECT_KEY)) {
return;
.flatMap(context -> {
var objectKey = getObjectKey(context.attachment());
if (objectKey == null) {
return Mono.just(context);
}
var objectName = annotations.get(OBJECT_KEY);
var properties = getProperties(deleteContext.configMap());
var client = buildOsClient(properties);
ossExecute(() -> {
log.info("{}/{} is being deleted from S3ObjectStorage", properties.getBucket(),
objectName);
client.deleteObject(properties.getBucket(), objectName);
log.info("{}/{} was deleted successfully from S3ObjectStorage", properties.getBucket(),
objectName);
return null;
}, client::shutdown);
}).map(DeleteContext::attachment);
return Mono.using(() -> buildS3Client(properties),
client -> Mono.fromCallable(
() -> client.deleteObject(DeleteObjectRequest.builder()
.bucket(properties.getBucket())
.key(objectKey)
.build())).subscribeOn(Schedulers.boundedElastic()),
S3Client::close)
.doOnNext(response -> {
checkResult(response, "delete object");
log.info("Delete object {} from bucket {} successfully",
objectKey, properties.getBucket());
})
.thenReturn(context);
})
.map(DeleteContext::attachment);
}
<T> T ossExecute(Supplier<T> runnable, Runnable finalizer) {
try {
return runnable.get();
} catch (AmazonServiceException ase) {
log.error("""
Caught an AmazonServiceException, which means your request made it to S3ObjectStorage, but was
rejected with an error response for some reason.
Error message: {}
""", ase.getMessage());
throw Exceptions.propagate(ase);
} catch (SdkClientException sce) {
log.error("""
Caught an SdkClientException, which means the client encountered a serious internal
problem while trying to communicate with S3ObjectStorage, such as not being able to access
the network.
Error message: {}
""", sce.getMessage());
throw Exceptions.propagate(sce);
} finally {
if (finalizer != null) {
finalizer.run();
}
@Override
public Mono<URI> getSharedURL(Attachment attachment, Policy policy, ConfigMap configMap,
Duration ttl) {
if (!this.shouldHandle(policy)) {
return Mono.empty();
}
var objectKey = getObjectKey(attachment);
if (objectKey == null) {
return Mono.error(new IllegalArgumentException(
"Cannot obtain object key from attachment " + attachment.getMetadata().getName()));
}
var properties = getProperties(configMap);
return Mono.using(() -> buildS3Presigner(properties),
s3Presigner -> {
var getObjectRequest = GetObjectRequest.builder()
.bucket(properties.getBucket())
.key(objectKey)
.build();
var presignedRequest = GetObjectPresignRequest.builder()
.signatureDuration(ttl)
.getObjectRequest(getObjectRequest)
.build();
var presignedGetObjectRequest = s3Presigner.presignGetObject(presignedRequest);
var presignedURL = presignedGetObjectRequest.url();
try {
return Mono.just(presignedURL.toURI());
} catch (URISyntaxException e) {
return Mono.error(
new RuntimeException("Failed to convert URL " + presignedURL + " to URI."));
}
},
SdkPresigner::close)
.subscribeOn(Schedulers.boundedElastic());
}
@Override
public Mono<URI> getPermalink(Attachment attachment, Policy policy, ConfigMap configMap) {
if (!this.shouldHandle(policy)) {
return Mono.empty();
}
var objectKey = getObjectKey(attachment);
if (objectKey == null) {
return Mono.error(new IllegalArgumentException(
"Cannot obtain object key from attachment " + attachment.getMetadata().getName()));
}
var properties = getProperties(configMap);
var objectURL = getObjectURL(properties, objectKey);
return Mono.just(URI.create(objectURL));
}
@Nullable
private String getObjectKey(Attachment attachment) {
var annotations = attachment.getMetadata().getAnnotations();
if (annotations == null) {
return null;
}
return annotations.get(OBJECT_KEY);
}
S3OsProperties getProperties(ConfigMap configMap) {
@@ -105,95 +165,237 @@ public class S3OsAttachmentHandler implements AttachmentHandler {
return JsonUtils.jsonToObject(settingJson, S3OsProperties.class);
}
Attachment buildAttachment(UploadContext uploadContext, S3OsProperties properties,
ObjectDetail objectDetail) {
String externalLink;
if (StringUtils.isBlank(properties.getDomain())) {
var host = properties.getBucket() + "." + properties.getEndpoint();
externalLink = properties.getProtocol() + "://" + host + "/" + objectDetail.objectName();
} else {
externalLink = properties.getProtocol() + "://" + properties.getDomain() + "/" + objectDetail.objectName();
}
Attachment buildAttachment(S3OsProperties properties, ObjectDetail objectDetail) {
String externalLink = getObjectURL(properties, objectDetail.uploadState.objectKey);
var metadata = new Metadata();
metadata.setName(UUID.randomUUID().toString());
metadata.setAnnotations(
Map.of(OBJECT_KEY, objectDetail.objectName(), Constant.EXTERNAL_LINK_ANNO_KEY,
UriUtils.encodePath(externalLink, StandardCharsets.UTF_8)));
metadata.setAnnotations(new HashMap<>(
Map.of(OBJECT_KEY, objectDetail.uploadState.objectKey,
Constant.EXTERNAL_LINK_ANNO_KEY, externalLink)));
var objectMetadata = objectDetail.objectMetadata();
var spec = new AttachmentSpec();
spec.setSize(objectMetadata.getContentLength());
spec.setDisplayName(uploadContext.file().filename());
spec.setMediaType(objectMetadata.getContentType());
spec.setSize(objectMetadata.contentLength());
spec.setDisplayName(objectDetail.uploadState.fileName);
spec.setMediaType(objectMetadata.contentType());
var attachment = new Attachment();
attachment.setMetadata(metadata);
attachment.setSpec(spec);
log.info("Upload object {} to bucket {} successfully", objectDetail.uploadState.objectKey,
properties.getBucket());
return attachment;
}
AmazonS3 buildOsClient(S3OsProperties properties) {
return AmazonS3ClientBuilder.standard()
.withCredentials(new AWSStaticCredentialsProvider(
new BasicAWSCredentials(properties.getAccessKey(), properties.getAccessSecret())))
.withEndpointConfiguration(
new AwsClientBuilder.EndpointConfiguration(
properties.getEndpointProtocol() + "://" + properties.getEndpoint(),
properties.getRegion()))
.withPathStyleAccessEnabled(false)
.withChunkedEncodingDisabled(true)
.build();
private String getObjectURL(S3OsProperties properties, String objectKey) {
String objectURL;
if (StringUtils.isBlank(properties.getDomain())) {
String host;
if (properties.getEnablePathStyleAccess()) {
host = properties.getEndpoint() + "/" + properties.getBucket();
} else {
host = properties.getBucket() + "." + properties.getEndpoint();
}
objectURL = properties.getProtocol() + "://" + host + "/" + objectKey;
} else {
objectURL = properties.getProtocol() + "://" + properties.getDomain() + "/" + objectKey;
}
return UriUtils.encodePath(objectURL, StandardCharsets.UTF_8);
}
S3Client buildS3Client(S3OsProperties properties) {
return S3Client.builder()
.region(Region.of(properties.getRegion()))
.endpointOverride(
URI.create(properties.getEndpointProtocol() + "://" + properties.getEndpoint()))
.credentialsProvider(() -> AwsBasicCredentials.create(properties.getAccessKey(),
properties.getAccessSecret()))
.serviceConfiguration(S3Configuration.builder()
.chunkedEncodingEnabled(false)
.pathStyleAccessEnabled(properties.getEnablePathStyleAccess())
.build())
.build();
}
private S3Presigner buildS3Presigner(S3OsProperties properties) {
return S3Presigner.builder()
.region(Region.of(properties.getRegion()))
.endpointOverride(
URI.create(properties.getEndpointProtocol() + "://" + properties.getEndpoint()))
.credentialsProvider(() -> AwsBasicCredentials.create(properties.getAccessKey(),
properties.getAccessSecret()))
.serviceConfiguration(S3Configuration.builder()
.chunkedEncodingEnabled(false)
.pathStyleAccessEnabled(properties.getEnablePathStyleAccess())
.build())
.build();
}
Mono<ObjectDetail> upload(UploadContext uploadContext, S3OsProperties properties) {
return Mono.fromCallable(() -> {
var client = buildOsClient(properties);
// build object name
var originFilename = uploadContext.file().filename();
var objectName = properties.getObjectName(originFilename);
return Mono.using(() -> buildS3Client(properties),
client -> {
var uploadState = new UploadState(properties, uploadContext.file().filename());
return checkFileExistsAndRename(uploadState, client)
// init multipart upload
.flatMap(state -> Mono.fromCallable(() -> client.createMultipartUpload(
CreateMultipartUploadRequest.builder()
.bucket(properties.getBucket())
.contentType(state.contentType)
.key(state.objectKey)
.build())).subscribeOn(Schedulers.boundedElastic()))
.flatMapMany((response) -> {
checkResult(response, "createMultipartUpload");
uploadState.uploadId = response.uploadId();
return uploadContext.file().content();
})
// buffer to part
.windowUntil((buffer) -> {
uploadState.buffered += buffer.readableByteCount();
if (uploadState.buffered >= MULTIPART_MIN_PART_SIZE) {
uploadState.buffered = 0;
return true;
} else {
return false;
}
})
// upload part
.concatMap((window) -> window.collectList().flatMap((bufferList) -> {
var buffer = S3OsAttachmentHandler.concatBuffers(bufferList);
return uploadPart(uploadState, buffer, client);
}))
.reduce(uploadState, (state, completedPart) -> {
state.completedParts.put(completedPart.partNumber(), completedPart);
return state;
})
// complete multipart upload
.flatMap((state) -> Mono.just(client.completeMultipartUpload(
CompleteMultipartUploadRequest
.builder()
.bucket(properties.getBucket())
.uploadId(state.uploadId)
.multipartUpload(CompletedMultipartUpload.builder()
.parts(state.completedParts.values())
.build())
.key(state.objectKey)
.build())
))
// get object metadata
.flatMap((response) -> {
checkResult(response, "completeUpload");
return Mono.just(client.headObject(
HeadObjectRequest.builder()
.bucket(properties.getBucket())
.key(uploadState.objectKey)
.build()
));
})
// build object detail
.map((response) -> {
checkResult(response, "getMetadata");
return new ObjectDetail(uploadState, response);
})
// close client
.doFinally((signalType) -> {
if (uploadState.needRemoveMapKey) {
uploadingFile.remove(uploadState.getUploadingMapKey());
}
});
},
SdkAutoCloseable::close);
}
var pos = new PipedOutputStream();
var pis = new PipedInputStream(pos);
DataBufferUtils.write(uploadContext.file().content(), pos)
.subscribeOn(Schedulers.boundedElastic()).doOnComplete(() -> {
try {
pos.close();
} catch (IOException ioe) {
// close the stream quietly
log.warn("Failed to close output stream", ioe);
}
}).subscribe(DataBufferUtils.releaseConsumer());
final var bucket = properties.getBucket();
var metadata = new ObjectMetadata();
var contentType = MediaTypeFactory.getMediaType(originFilename)
.orElse(MediaType.APPLICATION_OCTET_STREAM).toString();
metadata.setContentType(contentType);
var request = new PutObjectRequest(bucket, objectName, pis, metadata);
log.info("Uploading {} into S3ObjectStorage {}/{}/{}", originFilename,
properties.getEndpoint(), bucket, objectName);
return ossExecute(() -> {
var result = client.putObject(request);
if (log.isDebugEnabled()) {
debug(result);
private Mono<UploadState> checkFileExistsAndRename(UploadState uploadState,
S3Client s3client) {
return Mono.defer(() -> {
// deduplication of uploading files
if (uploadingFile.put(uploadState.getUploadingMapKey(),
uploadState.getUploadingMapKey()) != null) {
return Mono.error(new FileAlreadyExistsException("文件 " + uploadState.objectKey
+
" 已存在,建议更名后重试。[local]"));
}
var objectMetadata = client.getObjectMetadata(bucket, objectName);
return new ObjectDetail(bucket, objectName, objectMetadata);
}, client::shutdown);
}).subscribeOn(Schedulers.boundedElastic());
uploadState.needRemoveMapKey = true;
// check whether file exists
return Mono.fromSupplier(() -> s3client.headObject(HeadObjectRequest.builder()
.bucket(uploadState.properties.getBucket())
.key(uploadState.objectKey)
.build()))
.onErrorResume(NoSuchKeyException.class, e -> {
var builder = HeadObjectResponse.builder();
builder.sdkHttpResponse(SdkHttpResponse.builder().statusCode(404).build());
return Mono.just(builder.build());
})
.flatMap(response -> {
if (response != null && response.sdkHttpResponse() != null
&& response.sdkHttpResponse().isSuccessful()) {
return Mono.error(
new FileAlreadyExistsException("文件 " + uploadState.objectKey
+ " 已存在,建议更名后重试。[remote]"));
} else {
return Mono.just(uploadState);
}
});
})
.retryWhen(Retry.max(3)
.filter(FileAlreadyExistsException.class::isInstance)
.doAfterRetry((retrySignal) -> {
if (uploadState.needRemoveMapKey) {
uploadingFile.remove(uploadState.getUploadingMapKey());
uploadState.needRemoveMapKey = false;
}
uploadState.randomFileName();
})
)
.onErrorMap(Exceptions::isRetryExhausted,
throwable -> new ServerWebInputException(throwable.getCause().getMessage()));
}
void debug(PutObjectResult result) {
log.debug("""
PutObjectResult: VersionId: {}, ETag: {}, ContentMd5: {}, ExpirationTime: {}, ExpirationTimeRuleId: {},
response RawMetadata: {}, UserMetadata: {}
""", result.getVersionId(), result.getETag(), result.getContentMd5(), result.getExpirationTime(),
result.getExpirationTimeRuleId(), result.getMetadata().getRawMetadata(),
result.getMetadata().getUserMetadata());
private Mono<CompletedPart> uploadPart(UploadState uploadState, ByteBuffer buffer,
S3Client s3client) {
final int partNumber = ++uploadState.partCounter;
return Mono.just(s3client.uploadPart(UploadPartRequest.builder()
.bucket(uploadState.properties.getBucket())
.key(uploadState.objectKey)
.partNumber(partNumber)
.uploadId(uploadState.uploadId)
.contentLength((long) buffer.capacity())
.build(),
RequestBody.fromByteBuffer(buffer)))
.map((uploadPartResult) -> {
checkResult(uploadPartResult, "uploadPart");
return CompletedPart.builder()
.eTag(uploadPartResult.eTag())
.partNumber(partNumber)
.build();
});
}
private static void checkResult(SdkResponse result, String operation) {
log.info("operation: {}, result: {}", operation, result);
if (result.sdkHttpResponse() == null || !result.sdkHttpResponse().isSuccessful()) {
log.error("Failed to upload object, response: {}", result.sdkHttpResponse());
throw new ServerErrorException("对象存储响应错误无法将对象上传到S3对象存储", null);
}
}
private static ByteBuffer concatBuffers(List<DataBuffer> buffers) {
int partSize = 0;
for (DataBuffer b : buffers) {
partSize += b.readableByteCount();
}
ByteBuffer partData = ByteBuffer.allocate(partSize);
buffers.forEach((buffer) -> partData.put(buffer.toByteBuffer()));
// Reset read pointer to first byte
partData.rewind();
return partData;
}
boolean shouldHandle(Policy policy) {
if (policy == null || policy.getSpec() == null ||
policy.getSpec().getTemplateName() == null) {
@@ -203,7 +405,38 @@ public class S3OsAttachmentHandler implements AttachmentHandler {
return "s3os".equals(templateName);
}
record ObjectDetail(String bucketName, String objectName, ObjectMetadata objectMetadata) {
record ObjectDetail(UploadState uploadState, HeadObjectResponse objectMetadata) {
}
static class UploadState {
final S3OsProperties properties;
final String originalFileName;
String uploadId;
int partCounter;
Map<Integer, CompletedPart> completedParts = new HashMap<>();
int buffered = 0;
String contentType;
String fileName;
String objectKey;
boolean needRemoveMapKey = false;
public UploadState(S3OsProperties properties, String fileName) {
this.properties = properties;
this.originalFileName = fileName;
this.fileName = fileName;
this.objectKey = properties.getObjectName(fileName);
this.contentType = MediaTypeFactory.getMediaType(fileName)
.orElse(MediaType.APPLICATION_OCTET_STREAM).toString();
}
public String getUploadingMapKey() {
return properties.getBucket() + "/" + objectKey;
}
public void randomFileName() {
this.fileName = FileNameUtils.randomFileName(originalFileName, 4);
this.objectKey = properties.getObjectName(fileName);
}
}
}


@@ -10,6 +10,8 @@ class S3OsProperties {
private Protocol endpointProtocol = Protocol.https;
private Boolean enablePathStyleAccess = false;
private String endpoint;
private String accessKey;


@@ -27,9 +27,20 @@ spec:
- label: HTTP
value: http
validation: required
- $formkit: select
name: enablePathStyleAccess
label: Endpoint 访问风格
options:
- label: Virtual Hosted Style
value: false
- label: Path Style
value: true
value: false
validation: required
- $formkit: text
name: endpoint
label: EndPoint
placeholder: 请填写不带bucket-name的Endpoint
validation: required
help: 协议头请在上方设置,此处无需以"http://"或"https://"开头,系统会自动拼接
- $formkit: password


@@ -4,7 +4,7 @@ metadata:
name: PluginS3ObjectStorage
spec:
enabled: true
version: 1.2.1
version: 1.4.0
requires: ">=2.0.0"
author:
name: longjuan
@@ -16,4 +16,4 @@ spec:
displayName: "对象存储(Amazon S3 协议)"
description: "提供兼容 Amazon S3 协议的对象存储策略,兼容阿里云、腾讯云、七牛云等"
license:
- name: "MIT"
- name: "GPL-3.0"