Example 36 with UploadPartResult

use of com.amazonaws.services.s3.model.UploadPartResult in project aws-sdk-android by aws-amplify.

the class AmazonS3EncryptionClient method uploadObject.

/**
 * Used to encrypt data first to disk with pipelined concurrent multi-part
 * uploads to S3. This method enables significant speed-up of encrypting and
 * uploading large payloads to Amazon S3 via pipelining and parallel uploads
 * by consuming temporary disk space.
 * <p>
 * There are many ways you can customize the behavior of this method,
 * including
 * <ul>
 * <li>the configuration of your own custom thread pool</li>
 * <li>the part size of each multi-part upload request; By default, a
 * temporary ciphertext file is generated per part and gets uploaded
 * immediately to S3</li>
 * <li>the maximum temporary disk space that must not be exceeded by
 * execution of this request; By default, the encryption will block upon
 * hitting the limit and will only resume when the in-flight uploads catch
 * up by releasing the temporary disk space upon successful uploads of the
 * completed parts</li>
 * <li>the configuration of your own {@link MultiFileOutputStream} for
 * custom pipeline behavior</li>
 * <li>the configuration of your own {@link UploadObjectObserver} for custom
 * multi-part upload behavior</li>
 * </ul>
 * <p>
 * A request is handled with the following life cycle, calling the necessary
 * Service Provider Interface:
 * <ol>
 * <li>A thread pool is constructed (or retrieved from the request) for the
 * execution of concurrent upload tasks to be submitted by the
 * <code>UploadObjectObserver</code></li>
 * <li>An {@link UploadObjectObserver} is constructed (or retrieved from the
 * request) for execution of concurrent uploads to S3</li>
 * <li>Initialize the <code>UploadObjectObserver</code></li>
 * <li>Initialize a multi-part upload request to S3 by calling
 * {@link UploadObjectObserver#onUploadInitiation(UploadObjectRequest)}</li>
 * <li>A {@link MultiFileOutputStream} is constructed (or retrieved from the
 * request) which serves as the pipeline for incremental (but serial)
 * encryption to disk with concurrent multipart uploads to S3 whenever the
 * parts on the disk are ready</li>
 * <li>Initialize the <code>MultiFileOutputStream</code></li>
 * <li>Kicks off the pipeline for incremental encryption to disk with
 * pipelined concurrent multi-part uploads to S3</li>
 * <li>For every part encrypted into a temporary file on disk, it is
 * uploaded by calling
 * {@link UploadObjectObserver#onPartCreate(PartCreationEvent)}</li>
 * <li>Finally, clean up and complete the multi-part upload by calling
 * {@link UploadObjectObserver#onCompletion(List)}.</li>
 * </ol>
 *
 * @return the result of the completed multi-part upload
 *
 * @throws IOException
 *             if the encryption to disk failed
 * @throws InterruptedException
 *             if the current thread was interrupted while waiting
 * @throws ExecutionException
 *             if the concurrent uploads threw an exception
 */
public CompleteMultipartUploadResult uploadObject(final UploadObjectRequest req) throws IOException, InterruptedException, ExecutionException {
    // Set up the pipeline for concurrent encrypt and upload
    // Set up a thread pool for this pipeline
    ExecutorService es = req.getExecutorService();
    final boolean defaultExecutorService = es == null;
    if (es == null)
        es = Executors.newFixedThreadPool(clientConfiguration.getMaxConnections());
    UploadObjectObserver observer = req.getUploadObjectObserver();
    if (observer == null)
        observer = new UploadObjectObserver();
    // initialize the observer
    observer.init(req, new S3DirectImpl(), this, es);
    // Initiate upload
    final String uploadId = observer.onUploadInitiation(req);
    final List<PartETag> partETags = new ArrayList<PartETag>();
    MultiFileOutputStream mfos = req.getMultiFileOutputStream();
    if (mfos == null)
        mfos = new MultiFileOutputStream();
    try {
        // initialize the multi-file output stream
        mfos.init(observer, req.getPartSize(), req.getDiskLimit());
        // Kicks off the encryption-upload pipeline;
        // Note mfos is automatically closed upon method completion.
        crypto.putLocalObjectSecurely(req, uploadId, mfos);
        // block till all parts have been uploaded
        for (Future<UploadPartResult> future : observer.getFutures()) {
            UploadPartResult partResult = future.get();
            partETags.add(new PartETag(partResult.getPartNumber(), partResult.getETag()));
        }
    } catch (IOException ex) {
        throw onAbort(observer, ex);
    } catch (InterruptedException ex) {
        throw onAbort(observer, ex);
    } catch (ExecutionException ex) {
        throw onAbort(observer, ex);
    } catch (RuntimeException ex) {
        throw onAbort(observer, ex);
    } catch (Error ex) {
        throw onAbort(observer, ex);
    } finally {
        if (defaultExecutorService)
            // shut down the locally created thread pool
            es.shutdownNow();
        // delete left-over temp files
        mfos.cleanup();
    }
    // Complete upload
    return observer.onCompletion(partETags);
}
Also used : ArrayList(java.util.ArrayList) IOException(java.io.IOException) PartETag(com.amazonaws.services.s3.model.PartETag) UploadPartResult(com.amazonaws.services.s3.model.UploadPartResult) ExecutorService(java.util.concurrent.ExecutorService) MultiFileOutputStream(com.amazonaws.services.s3.internal.MultiFileOutputStream) ExecutionException(java.util.concurrent.ExecutionException)
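
The method leaves most of the pipeline configuration to the caller. Below is a minimal usage sketch, not taken from the SDK documentation: it assumes UploadObjectRequest has a (bucket, key, file) constructor and with-style setters mirroring the getters used in uploadObject above (withPartSize, withDiskLimit, withExecutorService); the bucket, key, file path, and sizes are placeholder values.

// Usage sketch (assumed API surface; verify against your UploadObjectRequest version).
// Requires: java.io.File, java.io.IOException, java.util.concurrent.*
static CompleteMultipartUploadResult uploadLargeFile(AmazonS3EncryptionClient encryptionClient)
        throws IOException, InterruptedException, ExecutionException {
    // Supply our own pool so we control its lifecycle; uploadObject only shuts
    // down the pool it creates itself.
    ExecutorService pool = Executors.newFixedThreadPool(8);
    try {
        UploadObjectRequest req =
                new UploadObjectRequest("my-bucket", "backups/large-payload.bin",
                        new File("/tmp/large-payload.bin"))
                    // 10 MiB ciphertext parts, each uploaded as soon as it is written to disk
                    .withPartSize(10 * 1024 * 1024)
                    // block further encryption once 100 MiB of temporary ciphertext is on disk
                    .withDiskLimit(100 * 1024 * 1024)
                    .withExecutorService(pool);
        return encryptionClient.uploadObject(req);
    } finally {
        pool.shutdown();
    }
}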

Example 37 with UploadPartResult

use of com.amazonaws.services.s3.model.UploadPartResult in project aws-sdk-android by aws-amplify.

the class UploadPartTask method call.

/*
 * Runs the part upload task and returns whether the part was uploaded successfully.
 */
@Override
public Boolean call() throws Exception {
    uploadPartTaskMetadata.state = TransferState.IN_PROGRESS;
    uploadPartRequest.setGeneralProgressListener(uploadPartTaskProgressListener);
    int retried = 1;
    while (true) {
        try {
            final UploadPartResult putPartResult = s3.uploadPart(uploadPartRequest);
            setTaskState(TransferState.PART_COMPLETED);
            dbUtil.updateETag(uploadPartRequest.getId(), putPartResult.getETag());
            return true;
        } catch (AbortedException e) {
            // If the request was aborted, the operation was paused or canceled. Do not retry.
            LOGGER.debug("Upload part aborted.");
            resetProgress();
            return false;
        } catch (final Exception e) {
            LOGGER.error("Unexpected error occurred: " + e);
            resetProgress();
            // Check if network is not connected, set the state to WAITING_FOR_NETWORK.
            try {
                if (TransferNetworkLossHandler.getInstance() != null && !TransferNetworkLossHandler.getInstance().isNetworkConnected()) {
                    LOGGER.info("Thread: [" + Thread.currentThread().getId() + "]: Network wasn't available.");
                    /*
                     * The network connection was interrupted. Move the TransferState
                     * to WAITING_FOR_NETWORK until network availability resumes.
                     */
                    uploadPartTaskMetadata.state = TransferState.WAITING_FOR_NETWORK;
                    dbUtil.updateState(uploadPartRequest.getId(), TransferState.WAITING_FOR_NETWORK);
                    LOGGER.info("Network Connection Interrupted: " + "Moving the TransferState to WAITING_FOR_NETWORK");
                    return false;
                }
            } catch (TransferUtilityException transferUtilityException) {
                LOGGER.error("TransferUtilityException: [" + transferUtilityException + "]");
            }
            if (retried >= RETRY_COUNT) {
                setTaskState(TransferState.FAILED);
                LOGGER.error("Encountered error uploading part ", e);
                throw e;
            }
            // Sleep before retrying
            long delayMs = exponentialBackoffWithJitter(retried);
            LOGGER.info("Retrying in " + delayMs + " ms.");
            TimeUnit.MILLISECONDS.sleep(delayMs);
            LOGGER.debug("Retry attempt: " + retried++, e);
        }
    }
}
Also used : UploadPartResult(com.amazonaws.services.s3.model.UploadPartResult) AbortedException(com.amazonaws.AbortedException)
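
The retry loop delegates the backoff delay to exponentialBackoffWithJitter, which is not shown in this example. A minimal sketch of such a helper follows, assuming a 1-second base delay, a 20-second cap, and a "full jitter" strategy; none of these constants are confirmed by the SDK source above.

// Hypothetical helper consistent with how it is called above (requires
// java.util.concurrent.ThreadLocalRandom). Returns a delay in milliseconds that
// grows exponentially with the attempt number, capped and then randomized so
// that concurrently failing part uploads do not all retry at the same instant.
private static long exponentialBackoffWithJitter(int attempt) {
    final long baseDelayMs = 1_000L;   // assumed base delay
    final long maxDelayMs = 20_000L;   // assumed upper bound
    // base * 2^(attempt - 1), capped; the shift is clamped to avoid overflow
    long ceiling = Math.min(maxDelayMs, baseDelayMs << Math.min(attempt - 1, 14));
    // "full jitter": uniformly random delay in [0, ceiling]
    return ThreadLocalRandom.current().nextLong(ceiling + 1);
}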

Example 38 with UploadPartResult

use of com.amazonaws.services.s3.model.UploadPartResult in project proxima-platform by O2-Czech-Republic.

the class S3Client method putObject.

/**
 * Put object to s3 using multi-part upload.
 *
 * @param blobName Name of the blob we want to write.
 * @return Output stream that we can write data into.
 */
public OutputStream putObject(String blobName) {
    Preconditions.checkState(!client().doesObjectExist(bucket, blobName), "Object already exists.");
    final String currentBucket = getBucket();
    InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(currentBucket, blobName);
    if (sseCustomerKey != null) {
        request.setSSECustomerKey(sseCustomerKey);
    }
    final String uploadId = client().initiateMultipartUpload(request).getUploadId();
    final List<PartETag> eTags = new ArrayList<>();
    final byte[] partBuffer = new byte[UPLOAD_PART_SIZE];
    return new OutputStream() {

        /**
         * Signals whether this output stream is closed.
         */
        private boolean closed = false;

        /**
         * Number of unflushed bytes in the current part buffer.
         */
        private int currentBytes = 0;

        /**
         * Part number of the current part in the multi-part upload. Indexing starts at 1.
         */
        private int partNumber = 1;

        @Override
        public void write(int b) throws IOException {
            Preconditions.checkState(!closed, "Output stream already closed.");
            // Number of bytes written is also position of next write.
            partBuffer[currentBytes] = (byte) b;
            currentBytes++;
            if (currentBytes >= UPLOAD_PART_SIZE) {
                flush();
            }
        }

        @Override
        public void flush() throws IOException {
            Preconditions.checkState(!closed, "Output stream already closed.");
            if (currentBytes > 0) {
                try (final InputStream is = new ByteArrayInputStream(partBuffer, 0, currentBytes)) {
                    final UploadPartRequest uploadPartRequest = new UploadPartRequest().withBucketName(currentBucket).withKey(blobName).withUploadId(uploadId).withPartNumber(partNumber).withInputStream(is).withPartSize(currentBytes);
                    if (sseCustomerKey != null) {
                        uploadPartRequest.setSSECustomerKey(sseCustomerKey);
                    }
                    final UploadPartResult uploadPartResult = client().uploadPart(uploadPartRequest);
                    eTags.add(uploadPartResult.getPartETag());
                    partNumber++;
                }
            }
            currentBytes = 0;
        }

        @Override
        public void close() throws IOException {
            if (!closed) {
                flush();
                client().completeMultipartUpload(new CompleteMultipartUploadRequest(currentBucket, blobName, uploadId, eTags));
                closed = true;
            }
        }
    };
}
Also used : UploadPartResult(com.amazonaws.services.s3.model.UploadPartResult) ByteArrayInputStream(java.io.ByteArrayInputStream) ByteArrayInputStream(java.io.ByteArrayInputStream) InputStream(java.io.InputStream) OutputStream(java.io.OutputStream) InitiateMultipartUploadRequest(com.amazonaws.services.s3.model.InitiateMultipartUploadRequest) ArrayList(java.util.ArrayList) UploadPartRequest(com.amazonaws.services.s3.model.UploadPartRequest) PartETag(com.amazonaws.services.s3.model.PartETag) CompleteMultipartUploadRequest(com.amazonaws.services.s3.model.CompleteMultipartUploadRequest)
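
A usage sketch for the stream returned by putObject; the S3Client instance and the blob name are placeholders. The anonymous class does not override write(byte[], int, int), so bulk writes are funneled byte-by-byte through the write(int) shown above.

// Usage sketch (requires java.io.IOException, java.io.OutputStream,
// java.nio.charset.StandardCharsets). close() flushes the final, possibly
// short, part and completes the multi-part upload.
static void writeBlob(S3Client s3Client) throws IOException {
    try (OutputStream out = s3Client.putObject("2021-01-01/events.bin")) {
        out.write("some payload".getBytes(StandardCharsets.UTF_8));
    }
}

One design note: every explicit flush() uploads whatever is currently buffered as its own part, and S3 rejects completed multi-part uploads whose non-final parts are smaller than 5 MiB, so UPLOAD_PART_SIZE should be at least that large and callers should avoid flushing manually mid-stream.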

Example 39 with UploadPartResult

use of com.amazonaws.services.s3.model.UploadPartResult in project proxima-platform by O2-Czech-Republic.

the class S3FileSystemTest method setUp.

@Before
public void setUp() {
    Map<String, Blob> blobs = new HashMap<>();
    S3Accessor accessor = new S3Accessor(TestUtils.createTestFamily(gateway, URI.create("s3://bucket/path"), cfg()));
    fs = new S3FileSystem(accessor, direct.getContext()) {

        @Override
        AmazonS3 client() {
            AmazonS3 client = mock(AmazonS3.class);
            when(client.listObjects(any(), any())).thenAnswer(invocationOnMock -> asListing(new ArrayList<>(blobs.values())));
            when(client.initiateMultipartUpload(any())).thenAnswer(invocationOnMock -> {
                final InitiateMultipartUploadRequest req = invocationOnMock.getArgument(0, InitiateMultipartUploadRequest.class);
                assertEquals(SSEC_KEY, req.getSSECustomerKey().getKey());
                String name = req.getKey();
                assertTrue(name.startsWith("path/"));
                blobs.put(name.substring(5), new Blob(name.substring(5)));
                final InitiateMultipartUploadResult result = new InitiateMultipartUploadResult();
                result.setUploadId(UUID.randomUUID().toString());
                return result;
            });
            when(client.uploadPart(any())).thenAnswer(invocationOnMock -> {
                final UploadPartResult result = new UploadPartResult();
                result.setETag(UUID.randomUUID().toString());
                return result;
            });
            return client;
        }
    };
}
Also used : EntityDescriptor(cz.o2.proxima.repository.EntityDescriptor) HashMap(java.util.HashMap) ObjectListing(com.amazonaws.services.s3.model.ObjectListing) ArrayList(java.util.ArrayList) Map(java.util.Map) InitiateMultipartUploadResult(com.amazonaws.services.s3.model.InitiateMultipartUploadResult) ConfigFactory(com.typesafe.config.ConfigFactory) AmazonS3(com.amazonaws.services.s3.AmazonS3) UploadPartResult(com.amazonaws.services.s3.model.UploadPartResult) S3ObjectSummary(com.amazonaws.services.s3.model.S3ObjectSummary) URI(java.net.URI) Path(cz.o2.proxima.direct.bulk.Path) Before(org.junit.Before) OutputStream(java.io.OutputStream) Repository(cz.o2.proxima.repository.Repository) ImmutableMap(com.google.common.collect.ImmutableMap) TestUtils(cz.o2.proxima.util.TestUtils) Assert.assertTrue(org.junit.Assert.assertTrue) IOException(java.io.IOException) Test(org.junit.Test) Mockito.when(org.mockito.Mockito.when) UUID(java.util.UUID) Collectors(java.util.stream.Collectors) List(java.util.List) InitiateMultipartUploadRequest(com.amazonaws.services.s3.model.InitiateMultipartUploadRequest) DirectDataOperator(cz.o2.proxima.direct.core.DirectDataOperator) Mockito.any(org.mockito.Mockito.any) Assert.assertEquals(org.junit.Assert.assertEquals) Mockito.mock(org.mockito.Mockito.mock)
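
The mock above stubs listObjects, initiateMultipartUpload, and uploadPart; a test that drives a writer all the way to close() would also need completeMultipartUpload stubbed. A sketch in the same Answer-based style, with synthetic placeholder values (requires com.amazonaws.services.s3.model.CompleteMultipartUploadRequest and CompleteMultipartUploadResult):

// Hypothetical additional stubbing, to be placed next to the uploadPart stub above.
when(client.completeMultipartUpload(any())).thenAnswer(invocationOnMock -> {
    final CompleteMultipartUploadRequest req =
            invocationOnMock.getArgument(0, CompleteMultipartUploadRequest.class);
    final CompleteMultipartUploadResult result = new CompleteMultipartUploadResult();
    result.setBucketName(req.getBucketName());
    result.setKey(req.getKey());
    result.setETag(UUID.randomUUID().toString());  // synthetic ETag
    return result;
});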

Aggregations

UploadPartResult (com.amazonaws.services.s3.model.UploadPartResult) 33
UploadPartRequest (com.amazonaws.services.s3.model.UploadPartRequest) 28
InitiateMultipartUploadRequest (com.amazonaws.services.s3.model.InitiateMultipartUploadRequest) 22
InitiateMultipartUploadResult (com.amazonaws.services.s3.model.InitiateMultipartUploadResult) 19
CompleteMultipartUploadRequest (com.amazonaws.services.s3.model.CompleteMultipartUploadRequest) 17
ArrayList (java.util.ArrayList) 16
PartETag (com.amazonaws.services.s3.model.PartETag) 15
IOException (java.io.IOException) 14
Test (org.junit.Test) 13
AmazonClientException (com.amazonaws.AmazonClientException) 11
CompleteMultipartUploadResult (com.amazonaws.services.s3.model.CompleteMultipartUploadResult) 10
ByteArrayInputStream (java.io.ByteArrayInputStream) 10
ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata) 9
AmazonS3 (com.amazonaws.services.s3.AmazonS3) 8
InputStream (java.io.InputStream) 7
AmazonS3Client (com.amazonaws.services.s3.AmazonS3Client) 6
AbortMultipartUploadRequest (com.amazonaws.services.s3.model.AbortMultipartUploadRequest) 6
CannedAccessControlList (com.amazonaws.services.s3.model.CannedAccessControlList) 5
File (java.io.File) 5
HashMap (java.util.HashMap) 5