
Example 31 with PutObjectRequest

use of com.amazonaws.services.s3.model.PutObjectRequest in project herd by FINRAOS.

the class S3DaoTest method testGetProperties.

/**
 * The method is successful when both the bucket and the key exist.
 */
@Test
public void testGetProperties() {
    String expectedKey = "foo";
    String expectedValue = "bar";
    // Put a simple "key=value" properties file in S3.
    ByteArrayInputStream inputStream = new ByteArrayInputStream((expectedKey + "=" + expectedValue).getBytes());
    PutObjectRequest putObjectRequest = new PutObjectRequest(S3_BUCKET_NAME, TARGET_S3_KEY, inputStream, new ObjectMetadata());
    s3Operations.putObject(putObjectRequest, null);
    // Read the object back as properties and verify the expected key/value pair.
    Properties properties = s3Dao.getProperties(S3_BUCKET_NAME, TARGET_S3_KEY, s3DaoTestHelper.getTestS3FileTransferRequestParamsDto());
    Assert.assertEquals("properties key '" + expectedKey + "'", expectedValue, properties.get(expectedKey));
}
Also used : ByteArrayInputStream (java.io.ByteArrayInputStream), Properties (java.util.Properties), ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata), PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest), Test (org.junit.Test)
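
For comparison, a minimal sketch of the same upload and read-back using the AWS SDK for Java v2 PutObjectRequest (the software.amazon.awssdk class also indexed in this project). The bucket and key names are placeholders, and herd's s3Operations/s3Dao test fixtures are not used here.

import java.util.Properties;

import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class PutObjectV2Sketch {
    public static void main(String[] args) throws Exception {
        // Placeholder bucket and key; the herd test uses the S3_BUCKET_NAME and TARGET_S3_KEY constants.
        String bucket = "my-test-bucket";
        String key = "target/test.properties";
        try (S3Client s3 = S3Client.create()) {
            // The v2 SDK uses a builder instead of the v1 (bucket, key, stream, metadata) constructor.
            PutObjectRequest putObjectRequest = PutObjectRequest.builder().bucket(bucket).key(key).build();
            s3.putObject(putObjectRequest, RequestBody.fromString("foo=bar"));
            // Read the object back and load it as java.util.Properties.
            GetObjectRequest getObjectRequest = GetObjectRequest.builder().bucket(bucket).key(key).build();
            try (ResponseInputStream<GetObjectResponse> inputStream = s3.getObject(getObjectRequest)) {
                Properties properties = new Properties();
                properties.load(inputStream);
                System.out.println(properties.getProperty("foo")); // expected to print "bar"
            }
        }
    }
}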

Example 32 with PutObjectRequest

use of com.amazonaws.services.s3.model.PutObjectRequest in project herd by FINRAOS.

the class S3DaoTest method testValidateGlacierS3FilesRestoredGlacierObjectRestoreInProgress.

@Test
public void testValidateGlacierS3FilesRestoredGlacierObjectRestoreInProgress() {
    // Put a 1 byte Glacier storage class file in S3 that is still being restored (OngoingRestore flag is true).
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setHeader(Headers.STORAGE_CLASS, StorageClass.Glacier);
    metadata.setOngoingRestore(true);
    s3Operations.putObject(new PutObjectRequest(storageDaoTestHelper.getS3ManagedBucketName(), TARGET_S3_KEY, new ByteArrayInputStream(new byte[1]), metadata), null);
    // Try to validate if the Glacier S3 file is already restored.
    try {
        S3FileTransferRequestParamsDto params = new S3FileTransferRequestParamsDto();
        params.setS3BucketName(storageDaoTestHelper.getS3ManagedBucketName());
        params.setFiles(Arrays.asList(new File(TARGET_S3_KEY)));
        s3Dao.validateGlacierS3FilesRestored(params);
        fail("Should throw an IllegalArgumentException when Glacier S3 file is not restored.");
    } catch (IllegalArgumentException e) {
        assertEquals(String.format("Archived Glacier S3 file \"%s\" is not restored. StorageClass {GLACIER}, OngoingRestore flag {true}, S3 bucket name {%s}", TARGET_S3_KEY, storageDaoTestHelper.getS3ManagedBucketName()), e.getMessage());
    }
}
Also used : S3FileTransferRequestParamsDto (org.finra.herd.model.dto.S3FileTransferRequestParamsDto), ByteArrayInputStream (java.io.ByteArrayInputStream), ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata), File (java.io.File), PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest), Test (org.junit.Test)
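
A rough sketch of the same restore-status check using the plain AWS SDK for Java v2, assuming a headObject call. herd's s3Dao.validateGlacierS3FilesRestored encapsulates this kind of check, so the bucket, key, and error message below are illustrative placeholders only.

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;

class RestoreStatusSketch {
    // Throws when the archived object still has a restore in progress; placeholder logic only.
    static void assertRestored(S3Client s3, String bucket, String key) {
        HeadObjectResponse head = s3.headObject(HeadObjectRequest.builder().bucket(bucket).key(key).build());
        // For archived objects S3 returns a restore header such as: ongoing-request="true"
        String restore = head.restore();
        if (restore != null && restore.contains("ongoing-request=\"true\"")) {
            throw new IllegalArgumentException(String.format(
                "Archived S3 object \"%s\" in bucket \"%s\" is not restored yet.", key, bucket));
        }
    }
}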

Example 33 with PutObjectRequest

use of com.amazonaws.services.s3.model.PutObjectRequest in project herd by FINRAOS.

the class S3DaoTest method testRestoreObjectsNonGlacierObject.

@Test
public void testRestoreObjectsNonGlacierObject() {
    // Put a 1 byte non-Glacier storage class file in S3.
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setHeader(Headers.STORAGE_CLASS, StorageClass.Standard);
    metadata.setOngoingRestore(false);
    s3Operations.putObject(new PutObjectRequest(storageDaoTestHelper.getS3ManagedBucketName(), TARGET_S3_KEY, new ByteArrayInputStream(new byte[1]), metadata), null);
    // Try to initiate a restore request for a non-Glacier file.
    try {
        S3FileTransferRequestParamsDto params = new S3FileTransferRequestParamsDto();
        params.setS3BucketName(storageDaoTestHelper.getS3ManagedBucketName());
        params.setFiles(Arrays.asList(new File(TARGET_S3_KEY)));
        s3Dao.restoreObjects(params, S3_RESTORE_OBJECT_EXPIRATION_IN_DAYS);
        fail("Should throw an IllegalStateException when file has a non-Glacier storage class.");
    } catch (IllegalStateException e) {
        assertEquals(String.format("Failed to initiate a restore request for \"%s\" key in \"%s\" bucket. " + "Reason: object is not in Glacier (Service: null; Status Code: 0; Error Code: null; Request ID: null)", TARGET_S3_KEY, storageDaoTestHelper.getS3ManagedBucketName()), e.getMessage());
    }
}
Also used : S3FileTransferRequestParamsDto (org.finra.herd.model.dto.S3FileTransferRequestParamsDto), ByteArrayInputStream (java.io.ByteArrayInputStream), ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata), File (java.io.File), PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest), Test (org.junit.Test)
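
A hedged sketch of initiating a restore with the AWS SDK for Java v2, roughly the operation s3Dao.restoreObjects delegates to. The class, bucket, and key names are placeholders, and the error translation mirrors (but does not reproduce) the message asserted in the test above.

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GlacierJobParameters;
import software.amazon.awssdk.services.s3.model.RestoreObjectRequest;
import software.amazon.awssdk.services.s3.model.RestoreRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;
import software.amazon.awssdk.services.s3.model.Tier;

class RestoreObjectsSketch {
    // Initiates a restore and translates the failure that occurs when the object is not archived,
    // which is the failure mode exercised by the test above. Bucket and key are placeholders.
    static void restore(S3Client s3, String bucket, String key, int expirationInDays) {
        try {
            s3.restoreObject(RestoreObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .restoreRequest(RestoreRequest.builder()
                    .days(expirationInDays)
                    .glacierJobParameters(GlacierJobParameters.builder().tier(Tier.STANDARD).build())
                    .build())
                .build());
        } catch (S3Exception e) {
            // S3 rejects restore requests for objects that are not in an archive storage class.
            throw new IllegalStateException(String.format(
                "Failed to initiate a restore request for \"%s\" key in \"%s\" bucket.", key, bucket), e);
        }
    }
}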

Example 34 with PutObjectRequest

use of com.amazonaws.services.s3.model.PutObjectRequest in project herd by FINRAOS.

the class S3DaoTest method testValidateS3File.

@Test
public void testValidateS3File() throws IOException, InterruptedException {
    // Put a 1 KB file in S3.
    PutObjectRequest putObjectRequest = new PutObjectRequest(storageDaoTestHelper.getS3ManagedBucketName(), TARGET_S3_KEY, new ByteArrayInputStream(new byte[(int) FILE_SIZE_1_KB]), null);
    s3Operations.putObject(putObjectRequest, null);
    // Validate the S3 file.
    S3FileTransferRequestParamsDto s3FileTransferRequestParamsDto = s3DaoTestHelper.getTestS3FileTransferRequestParamsDto();
    s3FileTransferRequestParamsDto.setS3KeyPrefix(TARGET_S3_KEY);
    s3Dao.validateS3File(s3FileTransferRequestParamsDto, FILE_SIZE_1_KB);
}
Also used : S3FileTransferRequestParamsDto (org.finra.herd.model.dto.S3FileTransferRequestParamsDto), ByteArrayInputStream (java.io.ByteArrayInputStream), PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest), Test (org.junit.Test)
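
A minimal sketch of a size check with the AWS SDK for Java v2, assuming the validation compares the reported Content-Length against an expected size, as s3Dao.validateS3File does for the 1 KB object above. The class, bucket, and key names below are placeholders.

import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.HeadObjectRequest;

class ValidateSizeSketch {
    // Compares the object's reported size against an expected size; placeholder logic only.
    static void validateSize(S3Client s3, String bucket, String key, long expectedSizeInBytes) {
        long actualSizeInBytes = s3.headObject(HeadObjectRequest.builder()
            .bucket(bucket)
            .key(key)
            .build())
            .contentLength();
        if (actualSizeInBytes != expectedSizeInBytes) {
            throw new IllegalArgumentException(String.format(
                "Object \"%s\" in bucket \"%s\" has size %d bytes, expected %d bytes.",
                key, bucket, actualSizeInBytes, expectedSizeInBytes));
        }
    }
}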

Example 35 with PutObjectRequest

use of com.amazonaws.services.s3.model.PutObjectRequest in project herd by FINRAOS.

the class S3DaoTest method testValidateS3FileRuntimeExceptionFileSizeDoesNotMatch.

@Test(expected = IllegalArgumentException.class)
public void testValidateS3FileRuntimeExceptionFileSizeDoesNotMatch() throws IOException, InterruptedException {
    // Put a 1 KB file in S3.
    PutObjectRequest putObjectRequest = new PutObjectRequest(storageDaoTestHelper.getS3ManagedBucketName(), TARGET_S3_KEY, new ByteArrayInputStream(new byte[(int) FILE_SIZE_1_KB]), null);
    s3Operations.putObject(putObjectRequest, null);
    // Try to validate an S3 file by specifying an incorrect file size.
    S3FileTransferRequestParamsDto s3FileTransferRequestParamsDto = s3DaoTestHelper.getTestS3FileTransferRequestParamsDto();
    s3FileTransferRequestParamsDto.setS3KeyPrefix(TARGET_S3_KEY);
    s3Dao.validateS3File(s3FileTransferRequestParamsDto, FILE_SIZE_1_KB + 999);
}
Also used : S3FileTransferRequestParamsDto (org.finra.herd.model.dto.S3FileTransferRequestParamsDto), ByteArrayInputStream (java.io.ByteArrayInputStream), PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest), Test (org.junit.Test)

Aggregations

PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest): 140 usages
ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata): 80 usages
ByteArrayInputStream (java.io.ByteArrayInputStream): 63 usages
Test (org.junit.Test): 61 usages
File (java.io.File): 35 usages
IOException (java.io.IOException): 32 usages
S3FileTransferRequestParamsDto (org.finra.herd.model.dto.S3FileTransferRequestParamsDto): 29 usages
PutObjectResult (com.amazonaws.services.s3.model.PutObjectResult): 23 usages
Upload (com.amazonaws.services.s3.transfer.Upload): 23 usages
AmazonClientException (com.amazonaws.AmazonClientException): 20 usages
InputStream (java.io.InputStream): 19 usages
PutObjectRequest (software.amazon.awssdk.services.s3.model.PutObjectRequest): 15 usages
BusinessObjectDataKey (org.finra.herd.model.api.xml.BusinessObjectDataKey): 14 usages
StorageUnitEntity (org.finra.herd.model.jpa.StorageUnitEntity): 14 usages
BusinessObjectDataEntity (org.finra.herd.model.jpa.BusinessObjectDataEntity): 13 usages
HashMap (java.util.HashMap): 12 usages
S3Exception (software.amazon.awssdk.services.s3.model.S3Exception): 10 usages
AmazonServiceException (com.amazonaws.AmazonServiceException): 9 usages
AmazonS3 (com.amazonaws.services.s3.AmazonS3): 9 usages
Exchange (org.apache.camel.Exchange): 8 usages