Example 31 with Upload

use of com.amazonaws.services.s3.transfer.Upload in project hadoop by apache.

the class S3AFileSystem method putObject.

/**
   * Start a transfer-manager managed async PUT of an object,
   * incrementing the put requests and put bytes
   * counters.
   * It does not update the other counters,
   * as existing code does that as progress callbacks come in.
   * Byte length is calculated from the file length, or, if there is no
   * file, from the content length of the header.
   * Because the operation is async, any stream supplied in the request
   * must reference data (files, buffers) which stay valid until the upload
   * completes.
   * @param putObjectRequest the request
   * @return the upload initiated
   */
public Upload putObject(PutObjectRequest putObjectRequest) {
    long len;
    if (putObjectRequest.getFile() != null) {
        len = putObjectRequest.getFile().length();
    } else {
        len = putObjectRequest.getMetadata().getContentLength();
    }
    incrementPutStartStatistics(len);
    try {
        Upload upload = transfers.upload(putObjectRequest);
        incrementPutCompletedStatistics(true, len);
        return upload;
    } catch (AmazonClientException e) {
        incrementPutCompletedStatistics(false, len);
        throw e;
    }
}
Also used : AmazonClientException(com.amazonaws.AmazonClientException) Upload(com.amazonaws.services.s3.transfer.Upload)
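
The instrumentation pattern in putObject() above is: count the bytes before starting the async work, record success or failure of the *submission* (not of the transfer itself), and rethrow on error. A minimal stand-alone sketch of that pattern using only java.util.concurrent (all names here are illustrative, not the Hadoop originals):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: count bytes up front, submit async, roll back the counter
// and record a failure only if the submission itself throws.
class PutStats {
    final AtomicLong bytesPendingUpload = new AtomicLong();
    final AtomicLong putsFailed = new AtomicLong();
    final ExecutorService pool = Executors.newSingleThreadExecutor();

    Future<?> putObject(Runnable transfer, long len) {
        bytesPendingUpload.addAndGet(len);       // like incrementPutStartStatistics(len)
        try {
            return pool.submit(transfer);        // async: the data must stay valid until done
        } catch (RejectedExecutionException e) {
            putsFailed.incrementAndGet();        // like incrementPutCompletedStatistics(false, len)
            bytesPendingUpload.addAndGet(-len);
            throw e;                             // caller still sees the failure
        }
    }
}
```

As in the Hadoop method, a successful return only means the upload was *started*; progress callbacks update the remaining counters later.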

Example 32 with Upload

use of com.amazonaws.services.s3.transfer.Upload in project hadoop by apache.

the class S3AOutputStream method close.

@Override
public void close() throws IOException {
    if (closed.getAndSet(true)) {
        return;
    }
    backupStream.close();
    LOG.debug("OutputStream for key '{}' closed. Now beginning upload", key);
    try {
        final ObjectMetadata om = fs.newObjectMetadata(backupFile.length());
        Upload upload = fs.putObject(fs.newPutObjectRequest(key, om, backupFile));
        ProgressableProgressListener listener = new ProgressableProgressListener(fs, key, upload, progress);
        upload.addProgressListener(listener);
        upload.waitForUploadResult();
        listener.uploadCompleted();
        // This will delete unnecessary fake parent directories
        fs.finishedWrite(key);
    } catch (InterruptedException e) {
        throw (InterruptedIOException) new InterruptedIOException(e.toString()).initCause(e);
    } catch (AmazonClientException e) {
        throw translateException("saving output", key, e);
    } finally {
        if (!backupFile.delete()) {
            LOG.warn("Could not delete temporary s3a file: {}", backupFile);
        }
        super.close();
    }
    LOG.debug("OutputStream for key '{}' upload complete", key);
}
Also used : InterruptedIOException(java.io.InterruptedIOException) AmazonClientException(com.amazonaws.AmazonClientException) Upload(com.amazonaws.services.s3.transfer.Upload) ObjectMetadata(com.amazonaws.services.s3.model.ObjectMetadata)
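
The first three lines of close() above are an idempotence guard: AtomicBoolean.getAndSet(true) ensures the upload-on-close logic runs exactly once even if close() is called repeatedly. A stand-alone sketch of just that guard (class and field names are ours, not Hadoop's):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: getAndSet(true) returns the previous value atomically, so
// only the first caller of close() sees "false" and does the work.
class UploadOnCloseStream extends OutputStream {
    private final AtomicBoolean closed = new AtomicBoolean(false);
    int uploadsStarted = 0;   // stand-in for the real "begin upload" work

    @Override
    public void write(int b) throws IOException {
        if (closed.get()) {
            throw new IOException("stream closed");
        }
    }

    @Override
    public void close() {
        if (closed.getAndSet(true)) {
            return;           // second and later calls are no-ops
        }
        uploadsStarted++;     // the real code starts the S3 upload here
    }
}
```

This matters for S3AOutputStream because a double close() would otherwise attempt a second PUT of the (already deleted) backup file.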

Example 33 with Upload

use of com.amazonaws.services.s3.transfer.Upload in project hadoop by apache.

the class S3AFileSystem method createEmptyObject.

// Used to create an empty file that represents an empty directory
private void createEmptyObject(final String objectName) throws AmazonClientException, AmazonServiceException, InterruptedIOException {
    final InputStream im = new InputStream() {

        @Override
        public int read() throws IOException {
            return -1;
        }
    };
    PutObjectRequest putObjectRequest = newPutObjectRequest(objectName, newObjectMetadata(0L), im);
    Upload upload = putObject(putObjectRequest);
    try {
        upload.waitForUploadResult();
    } catch (InterruptedException e) {
        throw (InterruptedIOException) new InterruptedIOException("Interrupted creating " + objectName).initCause(e);
    }
    incrementPutProgressStatistics(objectName, 0);
    instrumentation.directoryCreated();
}
Also used : InterruptedIOException(java.io.InterruptedIOException) FSDataInputStream(org.apache.hadoop.fs.FSDataInputStream) InputStream(java.io.InputStream) Upload(com.amazonaws.services.s3.transfer.Upload) PutObjectRequest(com.amazonaws.services.s3.model.PutObjectRequest)
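
The anonymous InputStream above is a zero-byte stream: read() always returns -1 (end of stream), so the PUT uploads an empty object that marks the directory. A stand-alone equivalent (helper names are ours):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

// Sketch: an InputStream that is immediately at end-of-stream, as used
// to create the empty "directory marker" object.
class EmptyStreams {
    static InputStream empty() {
        return new InputStream() {
            @Override
            public int read() {
                return -1;    // immediately signal end-of-stream
            }
        };
    }

    // Count how many bytes a stream yields before end-of-stream.
    static int drain(InputStream in) {
        int n = 0;
        try {
            while (in.read() != -1) {
                n++;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return n;
    }
}
```

Pairing this stream with newObjectMetadata(0L) is what tells S3 the content length is zero, so the transfer manager never tries to read past the immediate EOF.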

Example 34 with Upload

use of com.amazonaws.services.s3.transfer.Upload in project aws-doc-sdk-examples by awsdocs.

the class XferMgrUpload method uploadFile.

public static void uploadFile(String file_path, String bucket_name, String key_prefix, boolean pause) {
    System.out.println("file: " + file_path + (pause ? " (pause)" : ""));
    String key_name = null;
    if (key_prefix != null) {
        key_name = key_prefix + '/' + file_path;
    } else {
        key_name = file_path;
    }
    File f = new File(file_path);
    TransferManager xfer_mgr = new TransferManager();
    try {
        Upload xfer = xfer_mgr.upload(bucket_name, key_name, f);
        // loop with Transfer.isDone()
        XferMgrProgress.showTransferProgress(xfer);
        //  or block with Transfer.waitForCompletion()
        XferMgrProgress.waitForCompletion(xfer);
    } catch (AmazonServiceException e) {
        System.err.println(e.getErrorMessage());
        System.exit(1);
    } finally {
        // shut the TransferManager down even if the upload failed
        xfer_mgr.shutdownNow();
    }
}
Also used : TransferManager(com.amazonaws.services.s3.transfer.TransferManager) AmazonServiceException(com.amazonaws.AmazonServiceException) Upload(com.amazonaws.services.s3.transfer.Upload) MultipleFileUpload(com.amazonaws.services.s3.transfer.MultipleFileUpload) File(java.io.File)
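
The comments in uploadFile() point out two ways to track a transfer: poll Transfer.isDone() (so you can show progress) or block on waitForCompletion(). The same choice exists with a plain java.util.concurrent.Future, sketched here with hypothetical names as a stand-in for the Transfer API:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: polling loop vs. blocking wait on an async "transfer".
class TransferWait {
    static String runAndWait(Callable<String> transfer, boolean poll) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<String> xfer = pool.submit(transfer);
            if (poll) {
                while (!xfer.isDone()) {   // like Transfer.isDone()
                    Thread.sleep(10);       // a real loop would print progress here
                }
            }
            return xfer.get();              // like Transfer.waitForCompletion()
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();             // like TransferManager.shutdownNow()
        }
    }
}
```

Either way the caller ends up blocked until the result is available; polling just lets it do work (such as rendering a progress bar) in between checks.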

Example 35 with Upload

use of com.amazonaws.services.s3.transfer.Upload in project aws-doc-sdk-examples by awsdocs.

the class PutObject method main.

public static void main(String[] args) {
    final String USAGE = "\n" + "To run this example, supply the name of an S3 bucket and a file to\n" + "upload to it.\n" + "\n" + "Ex: PutObject <bucketname> <filename>\n";
    if (args.length < 2) {
        System.out.println(USAGE);
        System.exit(1);
    }
    String bucket_name = args[0];
    String file_path = args[1];
    String key_name = Paths.get(file_path).getFileName().toString();
    System.out.format("Uploading %s to S3 bucket %s...\n", file_path, bucket_name);
    final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    try {
        s3.putObject(bucket_name, key_name, new File(file_path));
    } catch (AmazonServiceException e) {
        System.err.println(e.getErrorMessage());
        System.exit(1);
    }
    System.out.println("Done!");
}
Also used : AmazonS3(com.amazonaws.services.s3.AmazonS3) AmazonServiceException(com.amazonaws.AmazonServiceException) File(java.io.File)
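
main() derives the object key from the file's base name via Paths.get(...).getFileName(). A small helper showing that derivation, combined with the optional key-prefix logic from Example 34 (the helper name and the prefix-plus-basename combination are ours, not from the SDK examples):

```java
import java.nio.file.Paths;

// Sketch: build an S3 key from a local file path, optionally under a
// prefix, using '/' as the key delimiter.
class S3Keys {
    static String keyFor(String filePath, String keyPrefix) {
        String base = Paths.get(filePath).getFileName().toString();
        return (keyPrefix != null) ? keyPrefix + '/' + base : base;
    }
}
```

Note that S3 keys are flat strings; the '/' only *looks* like a directory separator, which is why S3AFileSystem has to create explicit empty "directory marker" objects (Example 33).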

Aggregations

PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest) 19
ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata) 18
Upload (com.amazonaws.services.s3.transfer.Upload) 18
AmazonClientException (com.amazonaws.AmazonClientException) 11
IOException (java.io.IOException) 11
File (java.io.File) 8
DataStoreException (org.apache.jackrabbit.core.data.DataStoreException) 7
AmazonServiceException (com.amazonaws.AmazonServiceException) 6
PartETag (com.amazonaws.services.s3.model.PartETag) 6
InitiateMultipartUploadRequest (com.amazonaws.services.s3.model.InitiateMultipartUploadRequest) 5
InitiateMultipartUploadResult (com.amazonaws.services.s3.model.InitiateMultipartUploadResult) 5
PutObjectResult (com.amazonaws.services.s3.model.PutObjectResult) 5
InputStream (java.io.InputStream) 5
InterruptedIOException (java.io.InterruptedIOException) 5
CompleteMultipartUploadRequest (com.amazonaws.services.s3.model.CompleteMultipartUploadRequest) 4
S3Object (com.amazonaws.services.s3.model.S3Object) 4
UploadPartRequest (com.amazonaws.services.s3.model.UploadPartRequest) 4
ByteArrayInputStream (java.io.ByteArrayInputStream) 4
ArrayList (java.util.ArrayList) 4
AmazonS3 (com.amazonaws.services.s3.AmazonS3) 3