
Example 31 with PutObjectRequest

Use of com.amazonaws.services.s3.model.PutObjectRequest in project hadoop by Apache.

Class S3AFileSystem, method createEmptyObject.

// Used to create an empty file that represents an empty directory
private void createEmptyObject(final String objectName) throws AmazonClientException, AmazonServiceException, InterruptedIOException {
    final InputStream im = new InputStream() {

        @Override
        public int read() throws IOException {
            return -1;
        }
    };
    PutObjectRequest putObjectRequest = newPutObjectRequest(objectName, newObjectMetadata(0L), im);
    Upload upload = putObject(putObjectRequest);
    try {
        upload.waitForUploadResult();
    } catch (InterruptedException e) {
        throw new InterruptedIOException("Interrupted creating " + objectName);
    }
    incrementPutProgressStatistics(objectName, 0);
    instrumentation.directoryCreated();
}
Also used: InterruptedIOException (java.io.InterruptedIOException), FSDataInputStream (org.apache.hadoop.fs.FSDataInputStream), InputStream (java.io.InputStream), Upload (com.amazonaws.services.s3.transfer.Upload), PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest)
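The anonymous InputStream above always returns -1 from read(), i.e. it is permanently at end-of-stream, so the PUT body is zero bytes; that is why the request pairs it with newObjectMetadata(0L). A minimal standalone sketch of that behavior (no AWS dependencies; class and method names here are illustrative, not from Hadoop):

```java
import java.io.IOException;
import java.io.InputStream;

public class EmptyStreamDemo {

    // Same trick as createEmptyObject: a stream that is already exhausted.
    static InputStream emptyStream() {
        return new InputStream() {
            @Override
            public int read() throws IOException {
                return -1; // end of stream: the uploaded object body is zero bytes
            }
        };
    }

    // Drain the stream and count the bytes; for the empty stream this is 0,
    // matching the content length declared in the object metadata.
    static long countBytes(InputStream in) throws IOException {
        long n = 0;
        while (in.read() != -1) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(countBytes(emptyStream())); // prints 0
    }
}
```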

Example 32 with PutObjectRequest

Use of com.amazonaws.services.s3.model.PutObjectRequest in project hadoop by Apache.

Class S3AFileSystem, method newPutObjectRequest.

/**
   * Create a putObject request.
   * Adds the ACL and metadata
   * @param key key of object
   * @param metadata metadata header
   * @param srcfile source file
   * @return the request
   */
public PutObjectRequest newPutObjectRequest(String key, ObjectMetadata metadata, File srcfile) {
    Preconditions.checkNotNull(srcfile);
    PutObjectRequest putObjectRequest = new PutObjectRequest(bucket, key, srcfile);
    setOptionalPutRequestParameters(putObjectRequest);
    putObjectRequest.setCannedAcl(cannedACL);
    putObjectRequest.setMetadata(metadata);
    return putObjectRequest;
}
Also used: PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest)

Example 33 with PutObjectRequest

Use of com.amazonaws.services.s3.model.PutObjectRequest in project hadoop by Apache.

Class S3AFileSystem, method innerCopyFromLocalFile.

/**
   * The src file is on the local disk.  Add it to FS at
   * the given dst name.
   *
   * This version doesn't need to create a temporary file to calculate the md5.
   * Sadly this doesn't seem to be used by the shell cp :(
   *
   * delSrc indicates if the source should be removed
   * @param delSrc whether to delete the src
   * @param overwrite whether to overwrite an existing file
   * @param src path
   * @param dst path
   * @throws IOException IO problem
   * @throws FileAlreadyExistsException the destination file exists and
   * overwrite==false
   * @throws AmazonClientException failure in the AWS SDK
   */
private void innerCopyFromLocalFile(boolean delSrc, boolean overwrite, Path src, Path dst) throws IOException, FileAlreadyExistsException, AmazonClientException {
    incrementStatistic(INVOCATION_COPY_FROM_LOCAL_FILE);
    final String key = pathToKey(dst);
    if (!overwrite && exists(dst)) {
        throw new FileAlreadyExistsException(dst + " already exists");
    }
    LOG.debug("Copying local file from {} to {}", src, dst);
    // Since we have a local file, we don't need to stream into a temporary file
    LocalFileSystem local = getLocal(getConf());
    File srcfile = local.pathToFile(src);
    final ObjectMetadata om = newObjectMetadata(srcfile.length());
    PutObjectRequest putObjectRequest = newPutObjectRequest(key, om, srcfile);
    Upload up = putObject(putObjectRequest);
    ProgressableProgressListener listener = new ProgressableProgressListener(this, key, up, null);
    up.addProgressListener(listener);
    try {
        up.waitForUploadResult();
    } catch (InterruptedException e) {
        throw new InterruptedIOException("Interrupted copying " + src + " to " + dst + ", cancelling");
    }
    listener.uploadCompleted();
    // This will delete unnecessary fake parent directories
    finishedWrite(key);
    if (delSrc) {
        local.delete(src, false);
    }
}
Also used: InterruptedIOException (java.io.InterruptedIOException), FileAlreadyExistsException (org.apache.hadoop.fs.FileAlreadyExistsException), LocalFileSystem (org.apache.hadoop.fs.LocalFileSystem), Upload (com.amazonaws.services.s3.transfer.Upload), File (java.io.File), ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata), PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest)
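Both Hadoop examples catch InterruptedException around waitForUploadResult() and rethrow it as InterruptedIOException, so callers that only handle IOException still observe the interruption. A dependency-free sketch of that translation pattern follows; note that, as a common hardening the excerpts above do not include, this sketch also restores the thread's interrupt flag and attaches the original exception as the cause (helper and class names are illustrative):

```java
import java.io.InterruptedIOException;
import java.util.concurrent.CountDownLatch;

public class InterruptTranslationDemo {

    // Wait on a latch (standing in for Upload.waitForUploadResult()),
    // translating thread interruption into the IOException subclass
    // that filesystem APIs can propagate.
    static void awaitOrThrowIIOE(CountDownLatch latch, String what)
            throws InterruptedIOException {
        try {
            latch.await();
        } catch (InterruptedException e) {
            // Preserve the interrupt status for callers further up the stack.
            Thread.currentThread().interrupt();
            InterruptedIOException iioe =
                    new InterruptedIOException("Interrupted " + what);
            iioe.initCause(e);
            throw iioe;
        }
    }

    public static void main(String[] args) {
        CountDownLatch latch = new CountDownLatch(1);
        Thread.currentThread().interrupt(); // simulate an interrupt arriving
        try {
            awaitOrThrowIIOE(latch, "copying src to dst, cancelling");
        } catch (InterruptedIOException expected) {
            System.out.println("caught: " + expected.getMessage());
        }
    }
}
```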

Example 34 with PutObjectRequest

Use of com.amazonaws.services.s3.model.PutObjectRequest in project hadoop by Apache.

Class S3AFileSystem, method putObject.

/**
   * Start a transfer-manager managed async PUT of an object,
   * incrementing the put requests and put bytes
   * counters.
   * It does not update the other counters,
   * as existing code does that as progress callbacks come in.
   * Byte length is calculated from the file length, or, if there is no
   * file, from the content length of the header.
   * Because the operation is async, any stream supplied in the request
   * must reference data (files, buffers) which stay valid until the upload
   * completes.
   * @param putObjectRequest the request
   * @return the upload initiated
   */
public Upload putObject(PutObjectRequest putObjectRequest) {
    long len;
    if (putObjectRequest.getFile() != null) {
        len = putObjectRequest.getFile().length();
    } else {
        len = putObjectRequest.getMetadata().getContentLength();
    }
    incrementPutStartStatistics(len);
    try {
        Upload upload = transfers.upload(putObjectRequest);
        incrementPutCompletedStatistics(true, len);
        return upload;
    } catch (AmazonClientException e) {
        incrementPutCompletedStatistics(false, len);
        throw e;
    }
}
Also used: AmazonClientException (com.amazonaws.AmazonClientException), Upload (com.amazonaws.services.s3.transfer.Upload)
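The length selection in putObject, prefer the file's size and fall back to the declared content length from the metadata header, can be shown without the AWS SDK. A simplified sketch (the real code reads ObjectMetadata.getContentLength(); here a plain long stands in for it, and the names are illustrative):

```java
import java.io.File;

public class UploadLengthDemo {

    // Mirror of the branch in putObject: byte length comes from the file
    // when the request carries one, otherwise from the metadata header.
    static long uploadLength(File file, long metadataContentLength) {
        return (file != null) ? file.length() : metadataContentLength;
    }

    public static void main(String[] args) {
        // Stream-backed request: no file, so the metadata value wins.
        System.out.println(uploadLength(null, 42L)); // prints 42
    }
}
```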

Example 35 with PutObjectRequest

Use of com.amazonaws.services.s3.model.PutObjectRequest in project exhibitor by soabase.

Class S3Utils, method simpleUploadFile.

public static ObjectMetadata simpleUploadFile(S3Client client, byte[] bytes, String bucket, String key) throws Exception {
    // Compute the MD5 up front so the upload can be verified end-to-end.
    byte[] md5 = md5(bytes, bytes.length);
    ObjectMetadata metadata = new ObjectMetadata();
    metadata.setContentLength(bytes.length);
    metadata.setLastModified(new Date());
    // The Content-MD5 header carries the base64-encoded digest.
    metadata.setContentMD5(S3Utils.toBase64(md5));
    PutObjectRequest putObjectRequest = new PutObjectRequest(bucket, key, new ByteArrayInputStream(bytes), metadata);
    PutObjectResult putObjectResult = client.putObject(putObjectRequest);
    // For a simple (non-multipart) PUT, S3's ETag is the hex MD5 of the body.
    if (!putObjectResult.getETag().equals(S3Utils.toHex(md5))) {
        throw new Exception("Unable to match MD5 for config");
    }
    return metadata;
}
Also used: ByteArrayInputStream (java.io.ByteArrayInputStream), PutObjectResult (com.amazonaws.services.s3.model.PutObjectResult), ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata), Date (java.util.Date), PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest)
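simpleUploadFile relies on the same digest in two encodings: base64 for the Content-MD5 request header, and hex for comparison against the ETag that S3 returns for a simple (non-multipart) PUT. A self-contained sketch of the two encodings, with plausible stand-ins for the S3Utils.toHex/toBase64 helpers (no AWS client involved):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class Md5EncodingsDemo {

    // Hex form: what S3 returns as the ETag for a simple, non-multipart PUT.
    static String toHex(byte[] digest) {
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Base64 form: what the Content-MD5 request header expects.
    static String toBase64(byte[] digest) {
        return Base64.getEncoder().encodeToString(digest);
    }

    static byte[] md5(byte[] data) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("MD5").digest(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] digest = md5("hello".getBytes("UTF-8"));
        System.out.println(toHex(digest));    // 5d41402abc4b2a76b9719d911017c592
        System.out.println(toBase64(digest)); // XUFAKrxLKna5cZ2REBfFkg==
    }
}
```

Comparing the hex digest to the ETag only works for single PUTs; multipart uploads produce a composite ETag that is not the MD5 of the whole object.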

Aggregations

PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest): 33
ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata): 21
Upload (com.amazonaws.services.s3.transfer.Upload): 11
AmazonClientException (com.amazonaws.AmazonClientException): 10
PutObjectResult (com.amazonaws.services.s3.model.PutObjectResult): 8
Exchange (org.apache.camel.Exchange): 8
Processor (org.apache.camel.Processor): 8
Test (org.junit.Test): 8
InputStream (java.io.InputStream): 7
DataStoreException (org.apache.jackrabbit.core.data.DataStoreException): 7
File (java.io.File): 6
IOException (java.io.IOException): 6
ByteArrayInputStream (java.io.ByteArrayInputStream): 5
AmazonServiceException (com.amazonaws.AmazonServiceException): 4
S3Object (com.amazonaws.services.s3.model.S3Object): 4
Date (java.util.Date): 4
CopyObjectRequest (com.amazonaws.services.s3.model.CopyObjectRequest): 3
Copy (com.amazonaws.services.s3.transfer.Copy): 3
FileInputStream (java.io.FileInputStream): 3
HashMap (java.util.HashMap): 3