Mirror of https://github.com/halo-dev/plugin-s3.git, synced 2025-10-16 07:19:45 +00:00
Compare commits
18 Commits
- c109bbd61f
- 2320800907
- 00537c164c
- 8be39b9898
- 022ecea94f
- b3bdd02e08
- 5a95b4ced1
- 88490bb80f
- 5e9b9f803b
- c635ebede8
- 459cc1cf94
- 780258ffc1
- c9f13d4b5f
- 72af0fcdac
- 21b752dd25
- b5c2c50654
- 1158ea7ae8
- 11087a9915
README.md (74 changes)
````diff
@@ -2,6 +2,72 @@
 
 Provides an S3-protocol object storage policy for Halo 2.0, supporting Alibaba Cloud, Tencent Cloud, Qiniu Cloud, and other object storage providers compatible with the S3 protocol
 
+## Usage
+
+1. Download the latest JAR file from [Releases](https://github.com/halo-sigs/plugin-s3/releases).
+2. Upload the JAR file on the plugin management page of the Halo console to install it.
+3. Open the attachment management page in the console.
+4. Click the storage policy button in the upper right corner; a new S3 Object Storage policy can be created from the upper right corner of the storage policy dialog.
+5. Once created, the new S3 Object Storage policy can be selected when uploading.
+
+## Configuration Guide
+
+### Endpoint Access Style
+
+Choose according to the compatible access styles in the table below. If your provider is not listed, consult the provider's S3 compatibility documentation or experiment on your own.
+
+> Style notes:<br/>
+> When the Endpoint is set to `s3.example.com`:<br/>
+> Path Style: the SDK accesses `s3.example.com/<bucket-name>/<object-key>`<br/>
+> Virtual Hosted Style: the SDK accesses `<bucket-name>.s3.example.com/<object-key>`
+
+### Endpoint
+
+Always fill in the Endpoint **without** the bucket-name; the SDK handles the access style automatically.
+
+To find the S3 Endpoint, search the provider's documentation for keywords such as s3, Endpoint, or access domain; it is usually the same as the provider's own Endpoint.
+
+> For example, Baidu Cloud offers both `s3.bj.bcebos.com` and `<bucket-name>.s3.bj.bcebos.com` as Endpoints; fill in `s3.bj.bcebos.com`.
+
+### Access Key & Access Secret
+
+The same as the Access Key and Access Secret of the provider's own API; see the corresponding provider's documentation for details.
+
+### Bucket Name
+
+Usually identical to the bucket name shown in the provider's console.
+
+> Note that for some providers the S3 bucket name differs from the console bucket name. If an "Access Denied" error occurs, check whether the Bucket is correct.
+>
+> The bucket list can be inspected with S3Browser; for Qiniu Cloud, the S3 bucket name is also shown under "Developer Platform - Object Storage - Bucket Overview - s3 domain".
+
+### Region
+
+Usually this can be left empty.
+
+> If the other settings are confirmed correct but access still fails, look up the English Region name in the provider's documentation and fill it in, e.g. `cn-east-1`.
+>
+> Cloudflare requires `auto`, in all lowercase.
+
+## Compatibility of Selected Object Storage Providers
+
+|Provider|Documentation|Compatible Access Styles|Compatibility|
+| ----- | ---- | ----- | ----- |
+|Alibaba Cloud|https://help.aliyun.com/document_detail/410748.html|Virtual Hosted Style|✅|
+|Tencent Cloud|[https://cloud.tencent.com/document/product/436/41284](https://cloud.tencent.com/document/product/436/41284)|Virtual Hosted Style / <br>Path Style|✅|
+|Qiniu Cloud|https://developer.qiniu.com/kodo/4088/s3-access-domainname|Virtual Hosted Style / <br>Path Style|✅|
+|Baidu Cloud|https://cloud.baidu.com/doc/BOS/s/Fjwvyq9xo|Virtual Hosted Style / <br>Path Style|✅|
+|JD Cloud|https://docs.jdcloud.com/cn/object-storage-service/api/regions-and-endpoints|Virtual Hosted Style|✅|
+|Kingsoft Cloud|https://docs.ksyun.com/documents/6761|Virtual Hosted Style|✅|
+|QingCloud|https://docsv3.qingcloud.com/storage/object-storage/s3/intro/|Virtual Hosted Style / <br>Path Style|✅|
+|NetEase Shufan|[https://sf.163.com/help/documents/89796157866430464](https://sf.163.com/help/documents/89796157866430464)|Virtual Hosted Style|✅|
+|Cloudflare|Cloudflare S3-compatible API<br>[https://developers.cloudflare.com/r2/data-access/s3-api/](https://developers.cloudflare.com/r2/data-access/s3-api/)|Virtual Hosted Style / <br>Path Style|✅|
+|Oracle Cloud|[https://docs.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm](https://docs.oracle.com/en-us/iaas/Content/Object/Tasks/s3compatibleapi.htm)|Virtual Hosted Style / <br>Path Style|✅|
+|Self-hosted MinIO|\-|Path Style|✅|
+|Huawei Cloud|The documentation does not state S3 compatibility and support tickets do not guarantee it, but it works in practice|Virtual Hosted Style|❓|
+|UCloud|Only supports 8 MB part sizes, which this plugin does not support yet<br>[https://docs.ucloud.cn/ufile/s3/s3_introduction](https://docs.ucloud.cn/ufile/s3/s3_introduction)|\-|❌|
+|Upyun|Does not support the S3 protocol yet|\-|❌|
+
 ## Development Environment
 
 ```bash
@@ -31,11 +97,3 @@ plugin:
 ```
 
 After the build completes, the plugin JAR can be found in the `build/libs` directory; upload it on the plugin management page of the Halo console.
-
-## Usage
-
-1. Download the latest JAR file from [Releases](https://github.com/halo-sigs/plugin-s3/releases).
-2. Upload the JAR file on the plugin management page of the Halo console to install it.
-3. Open the attachment management page in the console.
-4. Click the storage policy button in the upper right corner; a new S3 Object Storage policy can be created from the upper right corner of the storage policy dialog.
-5. Once created, the new S3 Object Storage policy can be selected when uploading.
````
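The path-style versus virtual-hosted-style distinction from the README section above can be sketched in plain Java. This is a hypothetical illustration only (not part of the plugin's code) and assumes HTTPS:

```java
public class S3UrlStyles {
    /**
     * Build the object URL for a given access style, mirroring the rule in the
     * README: path style puts the bucket in the URL path, while virtual hosted
     * style puts the bucket in the host name.
     */
    public static String url(String endpoint, String bucket, String key, boolean pathStyle) {
        if (pathStyle) {
            return "https://" + endpoint + "/" + bucket + "/" + key;
        }
        return "https://" + bucket + "." + endpoint + "/" + key;
    }

    public static void main(String[] args) {
        System.out.println(url("s3.example.com", "my-bucket", "img/a.png", true));
        System.out.println(url("s3.example.com", "my-bucket", "img/a.png", false));
    }
}
```

Either form addresses the same object; which one works depends on the provider's compatibility, as listed in the table above.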
build.gradle (23 changes)
```diff
@@ -1,5 +1,6 @@
 plugins {
-    id "io.github.guqing.plugin-development" version "0.0.6-SNAPSHOT"
+    id "io.github.guqing.plugin-development" version "0.0.7-SNAPSHOT"
+    id "io.freefair.lombok" version "8.0.0-rc2"
     id 'java'
 }
@@ -8,7 +9,7 @@ sourceCompatibility = JavaVersion.VERSION_17
 
 repositories {
     maven { url 'https://s01.oss.sonatype.org/content/repositories/releases' }
-    maven { url 'https://repo.spring.io/milestone' }
+    maven { url 'https://s01.oss.sonatype.org/content/repositories/snapshots/' }
     mavenCentral()
 }
@@ -23,24 +24,18 @@ jar {
 }
 
 dependencies {
-    compileOnly platform("run.halo.dependencies:halo-dependencies:1.0.0")
-    compileOnly files("lib/halo-2.0.0-SNAPSHOT-plain.jar")
-    implementation platform('com.amazonaws:aws-java-sdk-bom:1.12.360')
-    implementation 'com.amazonaws:aws-java-sdk-s3'
+    implementation platform('run.halo.tools.platform:plugin:2.5.0-SNAPSHOT')
+    compileOnly 'run.halo.app:api'
+    implementation platform('software.amazon.awssdk:bom:2.19.8')
+    implementation 'software.amazon.awssdk:s3'
     implementation "javax.xml.bind:jaxb-api:2.3.1"
     implementation "javax.activation:activation:1.1.1"
     implementation "org.glassfish.jaxb:jaxb-runtime:2.3.3"
-    compileOnly 'org.projectlombok:lombok'
-    annotationProcessor 'org.projectlombok:lombok:1.18.22'
-    testImplementation platform("run.halo.dependencies:halo-dependencies:1.0.0")
-    testImplementation files("lib/halo-2.0.0-SNAPSHOT-plain.jar")
+    testImplementation 'run.halo.app:api'
     testImplementation 'org.springframework.boot:spring-boot-starter-test'
-    testImplementation 'org.junit.jupiter:junit-jupiter-api:5.9.0'
-    testRuntimeOnly 'org.junit.jupiter:junit-jupiter-engine:5.9.0'
+    testImplementation 'io.projectreactor:reactor-test'
 }
 
 test {
```
```diff
@@ -1 +1 @@
-version=1.2.0-SNAPSHOT
+version=1.4.1-SNAPSHOT
```
gradle/wrapper/gradle-wrapper.properties (vendored, 2 changes)
```diff
@@ -1,5 +1,5 @@
 distributionBase=GRADLE_USER_HOME
 distributionPath=wrapper/dists
-distributionUrl=https\://services.gradle.org/distributions/gradle-7.4-bin.zip
+distributionUrl=https\://services.gradle.org/distributions/gradle-8.0.2-bin.zip
 zipStoreBase=GRADLE_USER_HOME
 zipStorePath=wrapper/dists
```
Binary file not shown.
src/main/java/run/halo/s3os/FileNameUtils.java (new file, 44 lines)
```diff
@@ -0,0 +1,44 @@
+package run.halo.s3os;
+
+import com.google.common.io.Files;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.commons.lang3.StringUtils;
+
+public final class FileNameUtils {
+
+    private FileNameUtils() {
+    }
+
+    public static String removeFileExtension(String filename, boolean removeAllExtensions) {
+        if (filename == null || filename.isEmpty()) {
+            return filename;
+        }
+        var extPattern = "(?<!^)[.]" + (removeAllExtensions ? ".*" : "[^.]*$");
+        return filename.replaceAll(extPattern, "");
+    }
+
+    /**
+     * Append random string after file name.
+     * <pre>
+     * Case 1: halo.run -> halo-xyz.run
+     * Case 2: .run -> xyz.run
+     * Case 3: halo -> halo-xyz
+     * </pre>
+     *
+     * @param filename is name of file.
+     * @param length is for generating random string with specific length.
+     * @return File name with random string.
+     */
+    public static String randomFileName(String filename, int length) {
+        var nameWithoutExt = Files.getNameWithoutExtension(filename);
+        var ext = Files.getFileExtension(filename);
+        var random = RandomStringUtils.randomAlphabetic(length).toLowerCase();
+        if (StringUtils.isBlank(nameWithoutExt)) {
+            return random + "." + ext;
+        }
+        if (StringUtils.isBlank(ext)) {
+            return nameWithoutExt + "-" + random;
+        }
+        return nameWithoutExt + "-" + random + "." + ext;
+    }
+}
```
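The extension-stripping regex in `removeFileExtension` above can be exercised with a plain-Java sketch; the method body is copied verbatim from the new file, so neither Guava nor Commons Lang is needed:

```java
public class ExtensionDemo {
    // Same regex as FileNameUtils#removeFileExtension: a dot that is not at the
    // start of the name, followed by either everything after it (all extensions)
    // or only the final extension.
    static String removeFileExtension(String filename, boolean removeAllExtensions) {
        if (filename == null || filename.isEmpty()) {
            return filename;
        }
        var extPattern = "(?<!^)[.]" + (removeAllExtensions ? ".*" : "[^.]*$");
        return filename.replaceAll(extPattern, "");
    }

    public static void main(String[] args) {
        System.out.println(removeFileExtension("archive.tar.gz", false)); // archive.tar
        System.out.println(removeFileExtension("archive.tar.gz", true));  // archive
        System.out.println(removeFileExtension(".gitignore", true));      // .gitignore
    }
}
```

The negative lookbehind `(?<!^)` is what keeps dotfiles such as `.gitignore` intact: the leading dot is never treated as an extension separator.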
src/main/java/run/halo/s3os/S3OsAttachmentHandler.java

```diff
@@ -1,25 +1,35 @@
 package run.halo.s3os;
 
-import com.amazonaws.AmazonServiceException;
-import com.amazonaws.SdkClientException;
-import com.amazonaws.auth.AWSStaticCredentialsProvider;
-import com.amazonaws.auth.BasicAWSCredentials;
-import com.amazonaws.client.builder.AwsClientBuilder;
-import com.amazonaws.services.s3.AmazonS3;
-import com.amazonaws.services.s3.AmazonS3ClientBuilder;
-import com.amazonaws.services.s3.model.ObjectMetadata;
-import com.amazonaws.services.s3.model.PutObjectRequest;
-import com.amazonaws.services.s3.model.PutObjectResult;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.nio.ByteBuffer;
+import java.nio.charset.StandardCharsets;
+import java.nio.file.FileAlreadyExistsException;
+import java.time.Duration;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.UUID;
+import java.util.concurrent.ConcurrentHashMap;
 import lombok.extern.slf4j.Slf4j;
 import org.apache.commons.lang3.StringUtils;
 import org.pf4j.Extension;
+import org.reactivestreams.Publisher;
+import org.springframework.core.io.buffer.DataBuffer;
 import org.springframework.core.io.buffer.DataBufferUtils;
+import org.springframework.core.io.buffer.DefaultDataBufferFactory;
 import org.springframework.http.MediaType;
 import org.springframework.http.MediaTypeFactory;
+import org.springframework.lang.Nullable;
+import org.springframework.web.server.ServerErrorException;
+import org.springframework.web.server.ServerWebInputException;
 import org.springframework.web.util.UriUtils;
 import reactor.core.Exceptions;
+import reactor.core.publisher.Flux;
 import reactor.core.publisher.Mono;
 import reactor.core.scheduler.Schedulers;
+import reactor.util.context.Context;
+import reactor.util.retry.Retry;
 import run.halo.app.core.extension.attachment.Attachment;
 import run.halo.app.core.extension.attachment.Attachment.AttachmentSpec;
 import run.halo.app.core.extension.attachment.Constant;
```
```diff
@@ -28,76 +38,131 @@ import run.halo.app.core.extension.attachment.endpoint.AttachmentHandler;
 import run.halo.app.extension.ConfigMap;
 import run.halo.app.extension.Metadata;
 import run.halo.app.infra.utils.JsonUtils;
-import java.io.IOException;
-import java.io.PipedInputStream;
-import java.io.PipedOutputStream;
-import java.nio.charset.StandardCharsets;
-import java.util.Map;
-import java.util.UUID;
-import java.util.function.Supplier;
+import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
+import software.amazon.awssdk.awscore.presigner.SdkPresigner;
+import software.amazon.awssdk.core.SdkResponse;
+import software.amazon.awssdk.core.sync.RequestBody;
+import software.amazon.awssdk.http.SdkHttpResponse;
+import software.amazon.awssdk.regions.Region;
+import software.amazon.awssdk.services.s3.S3Client;
+import software.amazon.awssdk.services.s3.S3Configuration;
+import software.amazon.awssdk.services.s3.model.CompleteMultipartUploadRequest;
+import software.amazon.awssdk.services.s3.model.CompletedMultipartUpload;
+import software.amazon.awssdk.services.s3.model.CompletedPart;
+import software.amazon.awssdk.services.s3.model.CreateMultipartUploadRequest;
+import software.amazon.awssdk.services.s3.model.DeleteObjectRequest;
+import software.amazon.awssdk.services.s3.model.GetObjectRequest;
+import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
+import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
+import software.amazon.awssdk.services.s3.model.NoSuchKeyException;
+import software.amazon.awssdk.services.s3.model.UploadPartRequest;
+import software.amazon.awssdk.services.s3.presigner.S3Presigner;
+import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest;
+import software.amazon.awssdk.utils.SdkAutoCloseable;
 
 @Slf4j
 @Extension
 public class S3OsAttachmentHandler implements AttachmentHandler {
 
     private static final String OBJECT_KEY = "s3os.plugin.halo.run/object-key";
+    private static final int MULTIPART_MIN_PART_SIZE = 5 * 1024 * 1024;
+    private final Map<String, Object> uploadingFile = new ConcurrentHashMap<>();
 
     @Override
     public Mono<Attachment> upload(UploadContext uploadContext) {
         return Mono.just(uploadContext).filter(context -> this.shouldHandle(context.policy()))
             .flatMap(context -> {
                 final var properties = getProperties(context.configMap());
-                return upload(context, properties).map(
-                    objectDetail -> this.buildAttachment(context, properties, objectDetail));
+                return upload(context, properties)
+                    .subscribeOn(Schedulers.boundedElastic())
+                    .map(objectDetail -> this.buildAttachment(properties, objectDetail));
             });
     }
 
     @Override
     public Mono<Attachment> delete(DeleteContext deleteContext) {
         return Mono.just(deleteContext).filter(context -> this.shouldHandle(context.policy()))
-            .doOnNext(context -> {
-                var annotations = context.attachment().getMetadata().getAnnotations();
-                if (annotations == null || !annotations.containsKey(OBJECT_KEY)) {
-                    return;
-                }
-                var objectName = annotations.get(OBJECT_KEY);
-                var properties = getProperties(deleteContext.configMap());
-                var client = buildOsClient(properties);
-                ossExecute(() -> {
-                    log.info("{}/{} is being deleted from S3ObjectStorage", properties.getBucket(),
-                        objectName);
-                    client.deleteObject(properties.getBucket(), objectName);
-                    log.info("{}/{} was deleted successfully from S3ObjectStorage", properties.getBucket(),
-                        objectName);
-                    return null;
-                }, client::shutdown);
-            }).map(DeleteContext::attachment);
-    }
-
-    <T> T ossExecute(Supplier<T> runnable, Runnable finalizer) {
-        try {
-            return runnable.get();
-        } catch (AmazonServiceException ase) {
-            log.error("""
-                Caught an AmazonServiceException, which means your request made it to S3ObjectStorage, but was
-                rejected with an error response for some reason.
-                Error message: {}
-                """, ase.getMessage());
-            throw Exceptions.propagate(ase);
-        } catch (SdkClientException sce) {
-            log.error("""
-                Caught an SdkClientException, which means the client encountered a serious internal
-                problem while trying to communicate with S3ObjectStorage, such as not being able to access
-                the network.
-                Error message: {}
-                """, sce.getMessage());
-            throw Exceptions.propagate(sce);
-        } finally {
-            if (finalizer != null) {
-                finalizer.run();
-            }
-        }
-    }
+            .flatMap(context -> {
+                var objectKey = getObjectKey(context.attachment());
+                if (objectKey == null) {
+                    return Mono.just(context);
+                }
+                var properties = getProperties(deleteContext.configMap());
+                return Mono.using(() -> buildS3Client(properties),
+                    client -> Mono.fromCallable(
+                        () -> client.deleteObject(DeleteObjectRequest.builder()
+                            .bucket(properties.getBucket())
+                            .key(objectKey)
+                            .build())).subscribeOn(Schedulers.boundedElastic()),
+                    S3Client::close)
+                    .doOnNext(response -> {
+                        checkResult(response, "delete object");
+                        log.info("Delete object {} from bucket {} successfully",
+                            objectKey, properties.getBucket());
+                    })
+                    .thenReturn(context);
+            })
+            .map(DeleteContext::attachment);
+    }
+
+    @Override
+    public Mono<URI> getSharedURL(Attachment attachment, Policy policy, ConfigMap configMap,
+        Duration ttl) {
+        if (!this.shouldHandle(policy)) {
+            return Mono.empty();
+        }
+        var objectKey = getObjectKey(attachment);
+        if (objectKey == null) {
+            return Mono.error(new IllegalArgumentException(
+                "Cannot obtain object key from attachment " + attachment.getMetadata().getName()));
+        }
+        var properties = getProperties(configMap);
+
+        return Mono.using(() -> buildS3Presigner(properties),
+            s3Presigner -> {
+                var getObjectRequest = GetObjectRequest.builder()
+                    .bucket(properties.getBucket())
+                    .key(objectKey)
+                    .build();
+                var presignedRequest = GetObjectPresignRequest.builder()
+                    .signatureDuration(ttl)
+                    .getObjectRequest(getObjectRequest)
+                    .build();
+                var presignedGetObjectRequest = s3Presigner.presignGetObject(presignedRequest);
+                var presignedURL = presignedGetObjectRequest.url();
+                try {
+                    return Mono.just(presignedURL.toURI());
+                } catch (URISyntaxException e) {
+                    return Mono.error(
+                        new RuntimeException("Failed to convert URL " + presignedURL + " to URI."));
+                }
+            },
+            SdkPresigner::close)
+            .subscribeOn(Schedulers.boundedElastic());
+    }
+
+    @Override
+    public Mono<URI> getPermalink(Attachment attachment, Policy policy, ConfigMap configMap) {
+        if (!this.shouldHandle(policy)) {
+            return Mono.empty();
+        }
+        var objectKey = getObjectKey(attachment);
+        if (objectKey == null) {
+            // fallback to default handler for backward compatibility
+            return Mono.empty();
+        }
+        var properties = getProperties(configMap);
+        var objectURL = getObjectURL(properties, objectKey);
+        return Mono.just(URI.create(objectURL));
+    }
+
+    @Nullable
+    private String getObjectKey(Attachment attachment) {
+        var annotations = attachment.getMetadata().getAnnotations();
+        if (annotations == null) {
+            return null;
+        }
+        return annotations.get(OBJECT_KEY);
+    }
 
     S3OsProperties getProperties(ConfigMap configMap) {
```
```diff
@@ -105,94 +170,278 @@ public class S3OsAttachmentHandler implements AttachmentHandler {
         return JsonUtils.jsonToObject(settingJson, S3OsProperties.class);
     }
 
-    Attachment buildAttachment(UploadContext uploadContext, S3OsProperties properties,
-        ObjectDetail objectDetail) {
-        String externalLink;
-        if (StringUtils.isBlank(properties.getDomain())) {
-            var host = properties.getBucket() + "." + properties.getEndpoint();
-            externalLink = properties.getProtocol() + "://" + host + "/" + objectDetail.objectName();
-        } else {
-            externalLink = properties.getProtocol() + "://" + properties.getDomain() + "/" + objectDetail.objectName();
-        }
+    Attachment buildAttachment(S3OsProperties properties, ObjectDetail objectDetail) {
+        String externalLink = getObjectURL(properties, objectDetail.uploadState.objectKey);
 
         var metadata = new Metadata();
         metadata.setName(UUID.randomUUID().toString());
-        metadata.setAnnotations(
-            Map.of(OBJECT_KEY, objectDetail.objectName(), Constant.EXTERNAL_LINK_ANNO_KEY,
-                UriUtils.encodePath(externalLink, StandardCharsets.UTF_8)));
+        metadata.setAnnotations(new HashMap<>(
+            Map.of(OBJECT_KEY, objectDetail.uploadState.objectKey,
+                Constant.EXTERNAL_LINK_ANNO_KEY, externalLink)));
 
         var objectMetadata = objectDetail.objectMetadata();
         var spec = new AttachmentSpec();
-        spec.setSize(objectMetadata.getContentLength());
-        spec.setDisplayName(uploadContext.file().filename());
-        spec.setMediaType(objectMetadata.getContentType());
+        spec.setSize(objectMetadata.contentLength());
+        spec.setDisplayName(objectDetail.uploadState.fileName);
+        spec.setMediaType(objectMetadata.contentType());
 
         var attachment = new Attachment();
         attachment.setMetadata(metadata);
         attachment.setSpec(spec);
+        log.info("Upload object {} to bucket {} successfully", objectDetail.uploadState.objectKey,
+            properties.getBucket());
         return attachment;
     }
 
-    AmazonS3 buildOsClient(S3OsProperties properties) {
-        return AmazonS3ClientBuilder.standard()
-            .withCredentials(new AWSStaticCredentialsProvider(
-                new BasicAWSCredentials(properties.getAccessKey(), properties.getAccessSecret())))
-            .withEndpointConfiguration(
-                new AwsClientBuilder.EndpointConfiguration(
-                    properties.getEndpointProtocol() + "://" + properties.getEndpoint(),
-                    properties.getRegion()))
-            .withPathStyleAccessEnabled(false)
-            .withChunkedEncodingDisabled(true)
-            .build();
-    }
+    private String getObjectURL(S3OsProperties properties, String objectKey) {
+        String objectURL;
+        if (StringUtils.isBlank(properties.getDomain())) {
+            String host;
+            if (properties.getEnablePathStyleAccess()) {
+                host = properties.getEndpoint() + "/" + properties.getBucket();
+            } else {
+                host = properties.getBucket() + "." + properties.getEndpoint();
+            }
+            objectURL = properties.getProtocol() + "://" + host + "/" + objectKey;
+        } else {
+            objectURL = properties.getProtocol() + "://" + properties.getDomain() + "/" + objectKey;
+        }
+        return UriUtils.encodePath(objectURL, StandardCharsets.UTF_8);
+    }
+
+    S3Client buildS3Client(S3OsProperties properties) {
+        return S3Client.builder()
+            .region(Region.of(properties.getRegion()))
+            .endpointOverride(
+                URI.create(properties.getEndpointProtocol() + "://" + properties.getEndpoint()))
+            .credentialsProvider(() -> AwsBasicCredentials.create(properties.getAccessKey(),
+                properties.getAccessSecret()))
+            .serviceConfiguration(S3Configuration.builder()
+                .chunkedEncodingEnabled(false)
+                .pathStyleAccessEnabled(properties.getEnablePathStyleAccess())
+                .build())
+            .build();
+    }
+
+    private S3Presigner buildS3Presigner(S3OsProperties properties) {
+        return S3Presigner.builder()
+            .region(Region.of(properties.getRegion()))
+            .endpointOverride(
+                URI.create(properties.getEndpointProtocol() + "://" + properties.getEndpoint()))
+            .credentialsProvider(() -> AwsBasicCredentials.create(properties.getAccessKey(),
+                properties.getAccessSecret()))
+            .serviceConfiguration(S3Configuration.builder()
+                .chunkedEncodingEnabled(false)
+                .pathStyleAccessEnabled(properties.getEnablePathStyleAccess())
+                .build())
+            .build();
+    }
+
+    Flux<DataBuffer> reshape(Publisher<DataBuffer> content, int bufferSize) {
+        var dataBufferFactory = DefaultDataBufferFactory.sharedInstance;
+        return Flux.<ByteBuffer>create(sink -> {
+            var byteBuffer = ByteBuffer.allocate(bufferSize);
+            Flux.from(content)
+                .doOnNext(dataBuffer -> {
+                    var count = dataBuffer.readableByteCount();
+                    for (var i = 0; i < count; i++) {
+                        byteBuffer.put(dataBuffer.read());
+                        // Emit the buffer when it is full
+                        if (!byteBuffer.hasRemaining()) {
+                            sink.next(deepCopy(byteBuffer));
+                            byteBuffer.clear();
+                        }
+                    }
+                })
+                .doOnComplete(() -> {
+                    // Emit the last part of buffer.
+                    if (byteBuffer.position() > 0) {
+                        sink.next(deepCopy(byteBuffer));
+                    }
+                })
+                .subscribe(DataBufferUtils::release, sink::error, sink::complete,
+                    Context.of(sink.contextView()));
+        })
+            .map(dataBufferFactory::wrap)
+            .cast(DataBuffer.class)
+            .doOnDiscard(DataBuffer.class, DataBufferUtils::release);
+    }
+
+    ByteBuffer deepCopy(ByteBuffer src) {
+        src.flip();
+        var dest = ByteBuffer.allocate(src.limit());
+        dest.put(src);
+        src.rewind();
+        dest.flip();
+        return dest;
+    }
 
     Mono<ObjectDetail> upload(UploadContext uploadContext, S3OsProperties properties) {
-        return Mono.fromCallable(() -> {
-            var client = buildOsClient(properties);
-            // build object name
-            var originFilename = uploadContext.file().filename();
-            var objectName = properties.getObjectName(originFilename);
-
-            var pos = new PipedOutputStream();
-            var pis = new PipedInputStream(pos);
-            DataBufferUtils.write(uploadContext.file().content(), pos)
-                .subscribeOn(Schedulers.boundedElastic()).doOnComplete(() -> {
-                    try {
-                        pos.close();
-                    } catch (IOException ioe) {
-                        // close the stream quietly
-                        log.warn("Failed to close output stream", ioe);
-                    }
-                }).subscribe(DataBufferUtils.releaseConsumer());
-
-            final var bucket = properties.getBucket();
-            var metadata = new ObjectMetadata();
-            var contentType = MediaTypeFactory.getMediaType(originFilename)
-                .orElse(MediaType.APPLICATION_OCTET_STREAM).toString();
-            metadata.setContentType(contentType);
-            var request = new PutObjectRequest(bucket, objectName, pis, metadata);
-            log.info("Uploading {} into S3ObjectStorage {}/{}/{}", originFilename,
-                properties.getEndpoint(), bucket, objectName);
-
-            return ossExecute(() -> {
-                var result = client.putObject(request);
-                if (log.isDebugEnabled()) {
-                    debug(result);
-                }
-                var objectMetadata = client.getObjectMetadata(bucket, objectName);
-                return new ObjectDetail(bucket, objectName, objectMetadata);
-            }, client::shutdown);
-        }).subscribeOn(Schedulers.boundedElastic());
-    }
-
-    void debug(PutObjectResult result) {
-        log.debug("""
-            PutObjectResult: VersionId: {}, ETag: {}, ContentMd5: {}, ExpirationTime: {}, ExpirationTimeRuleId: {},
-            response RawMetadata: {}, UserMetadata: {}
-            """, result.getVersionId(), result.getETag(), result.getContentMd5(), result.getExpirationTime(),
-            result.getExpirationTimeRuleId(), result.getMetadata().getRawMetadata(),
-            result.getMetadata().getUserMetadata());
+        return Mono.using(() -> buildS3Client(properties),
+            client -> {
+                var uploadState = new UploadState(properties, uploadContext.file().filename());
+
+                var content = uploadContext.file().content();
+
+                return checkFileExistsAndRename(uploadState, client)
+                    // init multipart upload
+                    .flatMap(state -> Mono.fromCallable(() -> client.createMultipartUpload(
+                        CreateMultipartUploadRequest.builder()
+                            .bucket(properties.getBucket())
+                            .contentType(state.contentType)
+                            .key(state.objectKey)
+                            .build())))
+                    .doOnNext((response) -> {
+                        checkResult(response, "createMultipartUpload");
+                        uploadState.uploadId = response.uploadId();
+                    })
+                    .thenMany(reshape(content, MULTIPART_MIN_PART_SIZE))
+                    // buffer to part
+                    .windowUntil((buffer) -> {
+                        uploadState.buffered += buffer.readableByteCount();
+                        if (uploadState.buffered >= MULTIPART_MIN_PART_SIZE) {
+                            uploadState.buffered = 0;
+                            return true;
+                        } else {
+                            return false;
+                        }
+                    })
+                    // upload part
+                    .concatMap((window) -> window.collectList().flatMap((bufferList) -> {
+                        var buffer = S3OsAttachmentHandler.concatBuffers(bufferList);
+                        return uploadPart(uploadState, buffer, client);
+                    }))
+                    .reduce(uploadState, (state, completedPart) -> {
+                        state.completedParts.put(completedPart.partNumber(), completedPart);
+                        return state;
+                    })
+                    // complete multipart upload
+                    .flatMap((state) -> Mono.just(client.completeMultipartUpload(
+                        CompleteMultipartUploadRequest
+                            .builder()
+                            .bucket(properties.getBucket())
+                            .uploadId(state.uploadId)
+                            .multipartUpload(CompletedMultipartUpload.builder()
+                                .parts(state.completedParts.values())
+                                .build())
+                            .key(state.objectKey)
+                            .build())
+                    ))
+                    // get object metadata
+                    .flatMap((response) -> {
+                        checkResult(response, "completeUpload");
+                        return Mono.just(client.headObject(
+                            HeadObjectRequest.builder()
+                                .bucket(properties.getBucket())
+                                .key(uploadState.objectKey)
+                                .build()
+                        ));
+                    })
+                    // build object detail
+                    .map((response) -> {
+                        checkResult(response, "getMetadata");
+                        return new ObjectDetail(uploadState, response);
+                    })
+                    // close client
+                    .doFinally((signalType) -> {
+                        if (uploadState.needRemoveMapKey) {
+                            uploadingFile.remove(uploadState.getUploadingMapKey());
+                        }
+                    });
+            },
+            SdkAutoCloseable::close);
+    }
+
+    private Mono<UploadState> checkFileExistsAndRename(UploadState uploadState,
+        S3Client s3client) {
+        return Mono.defer(() -> {
+            // deduplication of uploading files
+            if (uploadingFile.put(uploadState.getUploadingMapKey(),
+                uploadState.getUploadingMapKey()) != null) {
+                return Mono.error(new FileAlreadyExistsException("File " + uploadState.objectKey
+                    + " already exists; please rename it and try again. [local]"));
+            }
+            uploadState.needRemoveMapKey = true;
+            // check whether file exists
+            return Mono.fromSupplier(() -> s3client.headObject(HeadObjectRequest.builder()
+                .bucket(uploadState.properties.getBucket())
+                .key(uploadState.objectKey)
+                .build()))
+                .onErrorResume(NoSuchKeyException.class, e -> {
+                    var builder = HeadObjectResponse.builder();
+                    builder.sdkHttpResponse(SdkHttpResponse.builder().statusCode(404).build());
```

(The remainder of this hunk is not included in the mirrored page.)
|
||||||
|
return Mono.just(builder.build());
|
||||||
|
})
|
||||||
|
.flatMap(response -> {
|
||||||
|
if (response != null && response.sdkHttpResponse() != null
|
||||||
|
&& response.sdkHttpResponse().isSuccessful()) {
|
||||||
|
return Mono.error(
|
||||||
|
new FileAlreadyExistsException("文件 " + uploadState.objectKey
|
||||||
|
+ " 已存在,建议更名后重试。[remote]"));
|
||||||
|
} else {
|
||||||
|
return Mono.just(uploadState);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
})
|
||||||
|
.retryWhen(Retry.max(3)
|
||||||
|
.filter(FileAlreadyExistsException.class::isInstance)
|
||||||
|
.doAfterRetry((retrySignal) -> {
|
||||||
|
if (uploadState.needRemoveMapKey) {
|
||||||
|
uploadingFile.remove(uploadState.getUploadingMapKey());
|
||||||
|
uploadState.needRemoveMapKey = false;
|
||||||
|
}
|
||||||
|
uploadState.randomFileName();
|
||||||
|
})
|
||||||
|
)
|
||||||
|
.onErrorMap(Exceptions::isRetryExhausted,
|
||||||
|
throwable -> new ServerWebInputException(throwable.getCause().getMessage()));
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
|
private Mono<CompletedPart> uploadPart(UploadState uploadState, ByteBuffer buffer,
|
||||||
|
S3Client s3client) {
|
||||||
|
final int partNumber = ++uploadState.partCounter;
|
||||||
|
return Mono.just(s3client.uploadPart(UploadPartRequest.builder()
|
||||||
|
.bucket(uploadState.properties.getBucket())
|
||||||
|
.key(uploadState.objectKey)
|
||||||
|
.partNumber(partNumber)
|
||||||
|
.uploadId(uploadState.uploadId)
|
||||||
|
.contentLength((long) buffer.capacity())
|
||||||
|
.build(),
|
||||||
|
RequestBody.fromByteBuffer(buffer)))
|
||||||
|
.map((uploadPartResult) -> {
|
||||||
|
checkResult(uploadPartResult, "uploadPart");
|
||||||
|
return CompletedPart.builder()
|
||||||
|
.eTag(uploadPartResult.eTag())
|
||||||
|
.partNumber(partNumber)
|
||||||
|
.build();
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
private static void checkResult(SdkResponse result, String operation) {
|
||||||
|
log.info("operation: {}, result: {}", operation, result);
|
||||||
|
if (result.sdkHttpResponse() == null || !result.sdkHttpResponse().isSuccessful()) {
|
||||||
|
log.error("Failed to upload object, response: {}", result.sdkHttpResponse());
|
||||||
|
throw new ServerErrorException("对象存储响应错误,无法将对象上传到S3对象存储", null);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
private static ByteBuffer concatBuffers(List<DataBuffer> buffers) {
|
||||||
|
int partSize = 0;
|
||||||
|
for (DataBuffer b : buffers) {
|
||||||
|
partSize += b.readableByteCount();
|
||||||
|
}
|
||||||
|
|
||||||
|
ByteBuffer partData = ByteBuffer.allocate(partSize);
|
||||||
|
buffers.forEach((buffer) -> partData.put(buffer.toByteBuffer()));
|
||||||
|
|
||||||
|
// Reset read pointer to first byte
|
||||||
|
partData.rewind();
|
||||||
|
|
||||||
|
return partData;
|
||||||
|
}
|
||||||
|
|
||||||
|
|
||||||
boolean shouldHandle(Policy policy) {
|
boolean shouldHandle(Policy policy) {
|
||||||
if (policy == null || policy.getSpec() == null ||
|
if (policy == null || policy.getSpec() == null ||
|
||||||
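The `concatBuffers` helper added above merges the buffers of one window into a single part and then rewinds the result, because `ByteBuffer.put` leaves the position at the end of the written data. A minimal standalone sketch of the same technique with plain `ByteBuffer`s (class and method names here are hypothetical, not part of the plugin):

```java
import java.nio.ByteBuffer;
import java.util.List;

public class ConcatDemo {
    // Merge several buffers into one part; mirrors the diff's concatBuffers.
    static ByteBuffer concat(List<ByteBuffer> buffers) {
        int size = 0;
        for (ByteBuffer b : buffers) {
            size += b.remaining();
        }
        ByteBuffer part = ByteBuffer.allocate(size);
        buffers.forEach(part::put);
        // rewind() resets the position to 0 so a reader starts at the first byte
        part.rewind();
        return part;
    }

    public static void main(String[] args) {
        var merged = concat(List.of(
            ByteBuffer.wrap("ha".getBytes()),
            ByteBuffer.wrap("lo".getBytes())));
        System.out.println(new String(merged.array())); // prints "halo"
    }
}
```

Without the `rewind()` call, `RequestBody.fromByteBuffer` would see a buffer whose position equals its limit and upload zero bytes.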
```diff
@@ -203,7 +452,38 @@ public class S3OsAttachmentHandler implements AttachmentHandler {
         return "s3os".equals(templateName);
     }
 
-    record ObjectDetail(String bucketName, String objectName, ObjectMetadata objectMetadata) {
+    record ObjectDetail(UploadState uploadState, HeadObjectResponse objectMetadata) {
     }
+
+    static class UploadState {
+        final S3OsProperties properties;
+        final String originalFileName;
+        String uploadId;
+        int partCounter;
+        Map<Integer, CompletedPart> completedParts = new HashMap<>();
+        int buffered = 0;
+        String contentType;
+        String fileName;
+        String objectKey;
+        boolean needRemoveMapKey = false;
+
+        public UploadState(S3OsProperties properties, String fileName) {
+            this.properties = properties;
+            this.originalFileName = fileName;
+            this.fileName = fileName;
+            this.objectKey = properties.getObjectName(fileName);
+            this.contentType = MediaTypeFactory.getMediaType(fileName)
+                .orElse(MediaType.APPLICATION_OCTET_STREAM).toString();
+        }
+
+        public String getUploadingMapKey() {
+            return properties.getBucket() + "/" + objectKey;
+        }
+
+        public void randomFileName() {
+            this.fileName = FileNameUtils.randomFileName(originalFileName, 4);
+            this.objectKey = properties.getObjectName(fileName);
+        }
+    }
 }
```
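The local-deduplication step in `checkFileExistsAndRename` relies on `Map#put` returning the previous value: a non-null return means another upload already reserved the same bucket/key. A minimal standalone sketch of that reservation pattern (class and method names here are hypothetical):

```java
import java.util.concurrent.ConcurrentHashMap;

public class DedupDemo {
    static final ConcurrentHashMap<String, String> uploading = new ConcurrentHashMap<>();

    // Returns true if the key was free and is now reserved by this caller.
    static boolean tryReserve(String key) {
        return uploading.put(key, key) == null;
    }

    static void release(String key) {
        uploading.remove(key);
    }

    public static void main(String[] args) {
        System.out.println(tryReserve("bucket/halo.png")); // true: first caller wins
        System.out.println(tryReserve("bucket/halo.png")); // false: duplicate detected
        release("bucket/halo.png");                        // done uploading
        System.out.println(tryReserve("bucket/halo.png")); // true again
    }
}
```

This is why the plugin clears the map entry in `doFinally` and in `doAfterRetry`: a stale reservation would otherwise block every later upload of the same object key.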
```diff
@@ -10,6 +10,8 @@ class S3OsProperties {
 
     private Protocol endpointProtocol = Protocol.https;
 
+    private Boolean enablePathStyleAccess = false;
+
     private String endpoint;
 
     private String accessKey;
```
```diff
@@ -19,7 +19,7 @@ spec:
         label: Bucket 桶名称
         validation: required
       - $formkit: select
-        name: endpoint_protocol
+        name: endpointProtocol
         label: Endpoint 访问协议
         options:
           - label: HTTPS
@@ -27,9 +27,20 @@ spec:
           - label: HTTP
             value: http
         validation: required
+      - $formkit: select
+        name: enablePathStyleAccess
+        label: Endpoint 访问风格
+        options:
+          - label: Virtual Hosted Style
+            value: false
+          - label: Path Style
+            value: true
+        value: false
+        validation: required
       - $formkit: text
         name: endpoint
         label: EndPoint
+        placeholder: 请填写不带bucket-name的Endpoint
         validation: required
         help: 协议头请在上方设置,此处无需以"http://"或"https://"开头,系统会自动拼接
       - $formkit: password
```
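The new `enablePathStyleAccess` setting selects between the two request styles described in the README. The real URL is assembled by the AWS SDK, but the difference can be sketched with simple string assembly (class and method names here are hypothetical):

```java
public class EndpointStyleDemo {
    // Path style puts the bucket in the path; virtual-hosted style puts it in the host.
    static String objectUrl(String endpoint, String bucket, String key, boolean pathStyle) {
        return pathStyle
            ? "https://" + endpoint + "/" + bucket + "/" + key
            : "https://" + bucket + "." + endpoint + "/" + key;
    }

    public static void main(String[] args) {
        System.out.println(objectUrl("s3.bj.bcebos.com", "my-bucket", "a.png", true));
        System.out.println(objectUrl("s3.bj.bcebos.com", "my-bucket", "a.png", false));
    }
}
```

Either way the configured endpoint stays bucket-free, which is why the form above asks for an Endpoint without the bucket-name.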
```diff
@@ -4,8 +4,8 @@ metadata:
   name: PluginS3ObjectStorage
 spec:
   enabled: true
-  version: 1.2.0
-  requires: ">=2.0.0"
+  version: 1.4.1
+  requires: ">=2.5.0"
   author:
     name: longjuan
     website: https://github.com/longjuan
@@ -16,4 +16,4 @@ spec:
   displayName: "对象存储(Amazon S3 协议)"
   description: "提供兼容 Amazon S3 协议的对象存储策略,兼容阿里云、腾讯云、七牛云等"
   license:
-    - name: "MIT"
+    - name: "GPL-3.0"
```
```diff
@@ -1,12 +1,19 @@
 package run.halo.s3os;
 
+import static java.nio.charset.StandardCharsets.UTF_8;
+import static org.junit.jupiter.api.Assertions.assertEquals;
 import static org.junit.jupiter.api.Assertions.assertFalse;
 import static org.junit.jupiter.api.Assertions.assertTrue;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
+import java.util.List;
 import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Test;
+import org.springframework.core.io.buffer.DataBuffer;
+import org.springframework.core.io.buffer.DefaultDataBufferFactory;
+import reactor.core.publisher.Flux;
+import reactor.test.StepVerifier;
 import run.halo.app.core.extension.attachment.Policy;
 
 class S3OsAttachmentHandlerTest {
@@ -33,4 +40,57 @@ class S3OsAttachmentHandlerTest {
         // policy is null
         assertFalse(handler.shouldHandle(null));
     }
+
+    @Test
+    void reshapeDataBufferWithSmallerBufferSize() {
+        var handler = new S3OsAttachmentHandler();
+        var factory = DefaultDataBufferFactory.sharedInstance;
+        var content = Flux.<DataBuffer>fromIterable(List.of(factory.wrap("halo".getBytes())));
+
+        StepVerifier.create(handler.reshape(content, 2))
+            .assertNext(dataBuffer -> {
+                var str = dataBuffer.toString(UTF_8);
+                assertEquals("ha", str);
+            })
+            .assertNext(dataBuffer -> {
+                var str = dataBuffer.toString(UTF_8);
+                assertEquals("lo", str);
+            })
+            .verifyComplete();
+    }
+
+    @Test
+    void reshapeDataBufferWithBiggerBufferSize() {
+        var handler = new S3OsAttachmentHandler();
+        var factory = DefaultDataBufferFactory.sharedInstance;
+        var content = Flux.<DataBuffer>fromIterable(List.of(factory.wrap("halo".getBytes())));
+
+        StepVerifier.create(handler.reshape(content, 10))
+            .assertNext(dataBuffer -> {
+                var str = dataBuffer.toString(UTF_8);
+                assertEquals("halo", str);
+            })
+            .verifyComplete();
+    }
+
+    @Test
+    void reshapeDataBuffersWithBiggerBufferSize() {
+        var handler = new S3OsAttachmentHandler();
+        var factory = DefaultDataBufferFactory.sharedInstance;
+        var content = Flux.<DataBuffer>fromIterable(List.of(
+            factory.wrap("ha".getBytes()),
+            factory.wrap("lo".getBytes())
+        ));
+
+        StepVerifier.create(handler.reshape(content, 3))
+            .assertNext(dataBuffer -> {
+                var str = dataBuffer.toString(UTF_8);
+                assertEquals("hal", str);
+            })
+            .assertNext(dataBuffer -> {
+                var str = dataBuffer.toString(UTF_8);
+                assertEquals("o", str);
+            })
+            .verifyComplete();
+    }
 }
```
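The new tests pin down `reshape`'s contract: incoming chunks are re-cut into pieces of the requested size, with a shorter final piece ("halo" at size 2 becomes "ha"/"lo"; "ha"+"lo" at size 3 becomes "hal"/"o"). The same re-chunking can be sketched without Reactor (class and method names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

public class ReshapeDemo {
    // Re-cut arbitrary chunks into fixed-size pieces; the last piece may be short.
    static List<String> reshape(List<String> chunks, int size) {
        var joined = new StringBuilder();
        chunks.forEach(joined::append);
        var out = new ArrayList<String>();
        for (int i = 0; i < joined.length(); i += size) {
            out.add(joined.substring(i, Math.min(i + size, joined.length())));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(reshape(List.of("halo"), 2));     // [ha, lo]
        System.out.println(reshape(List.of("ha", "lo"), 3)); // [hal, o]
    }
}
```

In the plugin this matters because S3 rejects multipart parts smaller than the minimum part size except for the last one, so buffers must be regrouped before `uploadPart` is called.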