Example 6 with PutObjectRequest

use of software.amazon.awssdk.services.s3.model.PutObjectRequest in project camel-kafka-connector by apache.

the class S3Utils method sendFilesFromPath.

public static void sendFilesFromPath(S3Client s3Client, String bucketName, File[] files) {
    LOG.debug("Putting S3 objects");
    for (File file : files) {
        LOG.debug("Trying to read file {}", file.getName());
        PutObjectRequest putObjectRequest = PutObjectRequest.builder().bucket(bucketName).key(file.getName()).build();
        s3Client.putObject(putObjectRequest, file.toPath());
    }
}
Also used : File(java.io.File) PutObjectRequest(software.amazon.awssdk.services.s3.model.PutObjectRequest)
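
A minimal caller sketch for the method above, assuming the AWS SDK v2 client builder with the default credentials provider; the region, bucket name, and local directory are illustrative and not taken from the project.

import java.io.File;

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

public class SendFilesExample {

    public static void main(String[] args) {
        // Assumed setup: default credentials chain, illustrative region and bucket name.
        try (S3Client s3Client = S3Client.builder().region(Region.US_EAST_1).build()) {
            File[] files = new File("/tmp/outbox").listFiles();
            if (files != null) {
                // Each file is uploaded under its own file name as the object key.
                S3Utils.sendFilesFromPath(s3Client, "example-bucket", files);
            }
        }
    }
}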

Example 7 with PutObjectRequest

use of com.amazonaws.services.s3.model.PutObjectRequest in project BridgeServer2 by Sage-Bionetworks.

the class StudyConsentService method writeBytesToPublicS3.

/**
 * Write the byte array to an S3 bucket. The object is stored with world-read (PublicRead) privileges,
 * and it will be served with the appropriate Content-Type header for the document's MimeType.
 */
void writeBytesToPublicS3(@Nonnull String bucket, @Nonnull String key, @Nonnull byte[] data, @Nonnull MimeType type) throws IOException {
    try (InputStream dataInputStream = new ByteArrayInputStream(data)) {
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentType(type.toString());
        PutObjectRequest request = new PutObjectRequest(bucket, key, dataInputStream, metadata).withCannedAcl(CannedAccessControlList.PublicRead);
        Stopwatch stopwatch = Stopwatch.createStarted();
        s3Client.putObject(request);
        logger.info("Finished writing to bucket " + bucket + " key " + key + " (" + data.length + " bytes) in " + stopwatch.elapsed(TimeUnit.MILLISECONDS) + " ms");
    }
}
Also used : ByteArrayInputStream(java.io.ByteArrayInputStream) InputStream(java.io.InputStream) Stopwatch(com.google.common.base.Stopwatch) ObjectMetadata(com.amazonaws.services.s3.model.ObjectMetadata) PutObjectRequest(com.amazonaws.services.s3.model.PutObjectRequest)
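
A hedged usage sketch for writeBytesToPublicS3; the method is package-private, so a call like this would sit in the same package (for example a test), and the service wiring, bucket name, and key are assumptions for illustration only.

// Illustrative only: how a caller in the same package might exercise the method above.
void publishConsentHtml(StudyConsentService service) throws IOException {
    byte[] html = "<html><body>Consent</body></html>".getBytes(StandardCharsets.UTF_8);
    // The object is written with the PublicRead canned ACL and a text/html content type.
    service.writeBytesToPublicS3("example-public-bucket", "subpop-guid/consent.html", html, MimeType.HTML);
}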

Example 8 with PutObjectRequest

use of com.amazonaws.services.s3.model.PutObjectRequest in project BridgeServer2 by Sage-Bionetworks.

the class StudyConsentServiceTest method publishConsent.

@Test
public void publishConsent() throws Exception {
    // This is the document with the footer, where all the template variables have been removed
    String transformedDoc = "<doc><p>Document</p><table class=\"bridge-sig-block\"><tbody><tr>"
            + "<td><div class=\"label\">Name of Adult Participant</div></td><td><img brimg=\"\" "
            + "alt=\"\" onerror=\"this.style.display='none'\" src=\"cid:consentSignature\" /><div "
            + "class=\"label\">Signature of Adult Participant</div></td><td><div class=\"label\">"
            + "Date</div></td></tr><tr><td><div class=\"label\">Email, Phone, or ID</div></td><td>"
            + "<div class=\"label\">Sharing Option</div></td></tr></tbody></table></doc>";
    StudyConsent consent = StudyConsent.create();
    consent.setCreatedOn(CREATED_ON);
    consent.setSubpopulationGuid(SUBPOP_GUID.getGuid());
    when(mockDao.getConsent(SUBPOP_GUID, CREATED_ON)).thenReturn(consent);
    when(mockS3Helper.readS3FileAsString(CONSENT_BUCKET, consent.getStoragePath())).thenReturn(DOCUMENT);
    App app = App.create();
    Subpopulation subpop = Subpopulation.create();
    subpop.setGuid(SUBPOP_GUID);
    service.setConsentTemplate(new ByteArrayResource("<doc>${consent.body}</doc>".getBytes()));
    StudyConsentView result = service.publishConsent(app, subpop, CREATED_ON);
    assertEquals(result.getDocumentContent(), DOCUMENT + SIGNATURE_BLOCK);
    assertEquals(result.getCreatedOn(), CREATED_ON);
    assertEquals(result.getSubpopulationGuid(), SUBPOP_GUID.getGuid());
    assertEquals(result.getStudyConsent(), consent);
    verify(mockSubpopService).updateSubpopulation(eq(app), subpopCaptor.capture());
    assertEquals(subpopCaptor.getValue().getPublishedConsentCreatedOn(), CREATED_ON);
    verify(mockS3Client, times(2)).putObject(requestCaptor.capture());
    PutObjectRequest request = requestCaptor.getAllValues().get(0);
    assertEquals(request.getBucketName(), PUBLICATION_BUCKET);
    assertEquals(request.getCannedAcl(), PublicRead);
    assertEquals(IOUtils.toString(request.getInputStream(), UTF_8), transformedDoc);
    ObjectMetadata metadata = request.getMetadata();
    assertEquals(metadata.getContentType(), MimeType.HTML.toString());
    request = requestCaptor.getAllValues().get(1);
    assertEquals(request.getBucketName(), PUBLICATION_BUCKET);
    assertEquals(request.getCannedAcl(), PublicRead);
    // The PDF output isn't easily verified...
    metadata = request.getMetadata();
    assertEquals(metadata.getContentType(), MimeType.PDF.toString());
}
Also used : App(org.sagebionetworks.bridge.models.apps.App) StudyConsent(org.sagebionetworks.bridge.models.subpopulations.StudyConsent) Subpopulation(org.sagebionetworks.bridge.models.subpopulations.Subpopulation) StudyConsentView(org.sagebionetworks.bridge.models.subpopulations.StudyConsentView) ByteArrayResource(org.springframework.core.io.ByteArrayResource) ObjectMetadata(com.amazonaws.services.s3.model.ObjectMetadata) PutObjectRequest(com.amazonaws.services.s3.model.PutObjectRequest) Test(org.testng.annotations.Test)
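
The test above relies on mocks and captors declared elsewhere in the test class; a minimal sketch of the assumed declarations follows (Mockito with TestNG, as the org.testng.annotations.Test import suggests). Field and type names are assumptions that mirror how the test uses them, not code copied from the project.

// Assumed test-class fields; names mirror their use in publishConsent() above.
@Mock
private AmazonS3 mockS3Client;

@Mock
private SubpopulationService mockSubpopService;

@Captor
private ArgumentCaptor<PutObjectRequest> requestCaptor;

@Captor
private ArgumentCaptor<Subpopulation> subpopCaptor;

@BeforeMethod
public void before() {
    // Initializes the @Mock and @Captor fields before each TestNG test method.
    MockitoAnnotations.initMocks(this);
}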

Example 9 with PutObjectRequest

use of software.amazon.awssdk.services.s3.model.PutObjectRequest in project para by Erudika.

the class AWSFileStore method store.

@Override
public String store(String path, InputStream data) {
    if (StringUtils.startsWith(path, "/")) {
        path = path.substring(1);
    }
    if (StringUtils.isBlank(path) || data == null) {
        return null;
    }
    int maxFileSizeMBytes = Para.getConfig().getConfigInt("para.s3.max_filesize_mb", 10);
    try {
        Map<String, String> om = new HashMap<String, String>(3);
        // 180 days
        om.put(HttpHeaders.CACHE_CONTROL, "max-age=15552000, must-revalidate");
        if (path.endsWith(".gz")) {
            om.put(HttpHeaders.CONTENT_ENCODING, "gzip");
            path = path.substring(0, path.length() - 3);
        }
        PutObjectRequest por = PutObjectRequest.builder()
                .bucket(bucket)
                .key(path)
                .metadata(om)
                .acl(ObjectCannedACL.PUBLIC_READ)
                .build();
        try (ByteArrayOutputStream baos = new ByteArrayOutputStream()) {
            byte[] buf = new byte[1024];
            int length;
            while ((length = data.read(buf)) > 0) {
                baos.write(buf, 0, length);
                if (baos.size() > (maxFileSizeMBytes * 1024 * 1024)) {
                    logger.warn("Failed to store file on S3 because it's too large - {}, {} bytes", path, baos.size());
                    return null;
                }
            }
            s3.putObject(por, RequestBody.fromBytes(baos.toByteArray()));
        }
        final String key = path;
        return s3.utilities().getUrl(b -> b.bucket(bucket).key(key)).toExternalForm();
    } catch (IOException e) {
        logger.error(null, e);
    } finally {
        try {
            data.close();
        } catch (IOException ex) {
            logger.error(null, ex);
        }
    }
    return null;
}
Also used : ObjectCannedACL(software.amazon.awssdk.services.s3.model.ObjectCannedACL) Logger(org.slf4j.Logger) ByteArrayOutputStream(java.io.ByteArrayOutputStream) S3Client(software.amazon.awssdk.services.s3.S3Client) LoggerFactory(org.slf4j.LoggerFactory) IOException(java.io.IOException) HashMap(java.util.HashMap) StringUtils(org.apache.commons.lang3.StringUtils) HttpHeaders(javax.ws.rs.core.HttpHeaders) Map(java.util.Map) RequestBody(software.amazon.awssdk.core.sync.RequestBody) PutObjectRequest(software.amazon.awssdk.services.s3.model.PutObjectRequest) FileStore(com.erudika.para.core.storage.FileStore) Para(com.erudika.para.core.utils.Para) DefaultAwsRegionProviderChain(software.amazon.awssdk.regions.providers.DefaultAwsRegionProviderChain) InputStream(java.io.InputStream)
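
A hedged usage sketch for store(); the no-argument AWSFileStore construction, the local file, and the key are assumptions for illustration rather than the project's actual wiring.

// Illustrative only: stores a gzipped file and returns the resulting public URL.
String storeReport(FileStore fileStore) throws IOException {
    try (InputStream in = new FileInputStream("/tmp/report.json.gz")) {
        // A trailing ".gz" is stripped from the key and a gzip Content-Encoding header is set.
        return fileStore.store("reports/report.json.gz", in);
    }
}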

Example 10 with PutObjectRequest

use of software.amazon.awssdk.services.s3.model.PutObjectRequest in project firehose by odpf.

the class S3Test method shouldThrowException.

@Test
public void shouldThrowException() throws IOException {
    S3Config s3Config = ConfigFactory.create(S3Config.class, new HashMap<Object, Object>() {
        {
            put("S3_TYPE", "SOME_TYPE");
            put("SOME_TYPE_S3_BUCKET_NAME", "TestBucket");
            put("SOME_TYPE_S3_REGION", "asia");
        }
    });
    S3Client s3Client = Mockito.mock(S3Client.class);
    S3 s3Storage = new S3(s3Config, s3Client);
    PutObjectRequest putObject = PutObjectRequest.builder().bucket("TestBucket").key("test").build();
    byte[] content = "test".getBytes();
    SdkClientException exception = SdkClientException.create("test");
    Mockito.when(s3Client.putObject(Mockito.any(PutObjectRequest.class), Mockito.any(RequestBody.class))).thenThrow(exception);
    BlobStorageException thrown = Assertions.assertThrows(BlobStorageException.class, () -> s3Storage.store("test", content), "BlobStorageException error was expected");
    ArgumentCaptor<PutObjectRequest> putObjectRequestArgumentCaptor = ArgumentCaptor.forClass(PutObjectRequest.class);
    ArgumentCaptor<RequestBody> requestBodyArgumentCaptor = ArgumentCaptor.forClass(RequestBody.class);
    Mockito.verify(s3Client, Mockito.times(1)).putObject(putObjectRequestArgumentCaptor.capture(), requestBodyArgumentCaptor.capture());
    Assert.assertEquals(putObject, putObjectRequestArgumentCaptor.getValue());
    byte[] expectedBytes = new byte[4];
    byte[] actualBytes = new byte[4];
    RequestBody.fromBytes(content).contentStreamProvider().newStream().read(expectedBytes);
    requestBodyArgumentCaptor.getValue().contentStreamProvider().newStream().read(actualBytes);
    Assert.assertEquals(new String(expectedBytes), new String(actualBytes));
    Assertions.assertEquals(new BlobStorageException("test", "test", exception), thrown);
}
Also used : S3(io.odpf.firehose.sink.common.blobstorage.s3.S3) S3Config(io.odpf.firehose.config.S3Config) SdkClientException(software.amazon.awssdk.core.exception.SdkClientException) S3Client(software.amazon.awssdk.services.s3.S3Client) BlobStorageException(io.odpf.firehose.sink.common.blobstorage.BlobStorageException) PutObjectRequest(software.amazon.awssdk.services.s3.model.PutObjectRequest) RequestBody(software.amazon.awssdk.core.sync.RequestBody) Test(org.junit.Test)
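
The S3 adapter under test is not shown above; a minimal sketch of the store() shape the assertions imply follows. The field names (bucketName, s3Client) and the exact BlobStorageException arguments are assumptions reconstructed from the test, not the project's source.

// Assumed shape of S3.store(), inferred from the test's assertions; not the project's code.
public void store(String objectName, byte[] content) throws BlobStorageException {
    try {
        PutObjectRequest request = PutObjectRequest.builder()
                .bucket(bucketName) // assumed field populated from S3Config
                .key(objectName)
                .build();
        s3Client.putObject(request, RequestBody.fromBytes(content));
    } catch (SdkClientException e) {
        // The test expects the SDK failure to surface as a BlobStorageException
        // carrying the key and the exception message ("test" and "test" here).
        throw new BlobStorageException(objectName, e.getMessage(), e);
    }
}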

Aggregations

PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest) 301
ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata) 193
ByteArrayInputStream (java.io.ByteArrayInputStream) 113
Test (org.junit.Test) 113
PutObjectRequest (software.amazon.awssdk.services.s3.model.PutObjectRequest) 78
File (java.io.File) 68
IOException (java.io.IOException) 65
InputStream (java.io.InputStream) 55
S3FileTransferRequestParamsDto (org.finra.herd.model.dto.S3FileTransferRequestParamsDto) 42
AmazonClientException (com.amazonaws.AmazonClientException) 40
PutObjectResult (com.amazonaws.services.s3.model.PutObjectResult) 39
Upload (com.amazonaws.services.s3.transfer.Upload) 37
AmazonServiceException (com.amazonaws.AmazonServiceException) 35
Test (org.junit.jupiter.api.Test) 30
AmazonS3 (com.amazonaws.services.s3.AmazonS3) 28
AmazonS3Exception (com.amazonaws.services.s3.model.AmazonS3Exception) 20
Date (java.util.Date) 20
BusinessObjectDataKey (org.finra.herd.model.api.xml.BusinessObjectDataKey) 20
StorageUnitEntity (org.finra.herd.model.jpa.StorageUnitEntity) 20
RequestBody (software.amazon.awssdk.core.sync.RequestBody) 19