Example 21 with PutObjectRequest

Use of com.amazonaws.services.s3.model.PutObjectRequest in project jackrabbit-oak by Apache.

The class S3Backend, method addMetadataRecord.

@Override
public void addMetadataRecord(final InputStream input, final String name) throws DataStoreException {
    ClassLoader contextClassLoader = Thread.currentThread().getContextClassLoader();
    try {
        Thread.currentThread().setContextClassLoader(getClass().getClassLoader());
        Upload upload = tmx.upload(s3ReqDecorator.decorate(new PutObjectRequest(bucket, addMetaKeyPrefix(name), input, new ObjectMetadata())));
        upload.waitForUploadResult();
    } catch (InterruptedException e) {
        LOG.error("Error in uploading", e);
        throw new DataStoreException("Error in uploading", e);
    } finally {
        if (contextClassLoader != null) {
            Thread.currentThread().setContextClassLoader(contextClassLoader);
        }
    }
}
Also used : DataStoreException(org.apache.jackrabbit.core.data.DataStoreException) Upload(com.amazonaws.services.s3.transfer.Upload) ObjectMetadata(com.amazonaws.services.s3.model.ObjectMetadata) PutObjectRequest(com.amazonaws.services.s3.model.PutObjectRequest)
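
A minimal, self-contained sketch of the same TransferManager-based stream upload, for readers who want to try the pattern outside the DataStore backend. The region, bucket name and key are placeholder assumptions, and the jackrabbit-oak specific s3ReqDecorator and metadata key prefix are omitted.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class MetadataUploadSketch {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1")                          // assumed region
                .build();
        TransferManager tmx = TransferManagerBuilder.standard()
                .withS3Client(s3)
                .build();

        byte[] payload = "sample metadata record".getBytes(StandardCharsets.UTF_8);
        try (InputStream input = new ByteArrayInputStream(payload)) {
            ObjectMetadata metadata = new ObjectMetadata();
            metadata.setContentLength(payload.length);            // known length avoids in-memory buffering of the stream

            // Same request shape as S3Backend#addMetadataRecord: bucket, key, stream, metadata.
            Upload upload = tmx.upload(
                    new PutObjectRequest("example-bucket", "META/my-record", input, metadata));
            upload.waitForUploadResult();                         // blocks until the upload finishes
        } finally {
            tmx.shutdownNow(false);                               // release TransferManager threads, keep the S3 client usable
        }
    }
}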

Example 22 with PutObjectRequest

Use of com.amazonaws.services.s3.model.PutObjectRequest in project uPortal by Jasig.

The class AwsS3DynamicSkinService, method saveContentToAwsS3Bucket.

private void saveContentToAwsS3Bucket(final String objectKey, final String content, final DynamicSkinInstanceData data) {
    final InputStream inputStream = IOUtils.toInputStream(content);
    final ObjectMetadata objectMetadata = this.createObjectMetadata(content, data);
    final PutObjectRequest putObjectRequest = this.createPutObjectRequest(objectKey, inputStream, objectMetadata);
    log.info(ATTEMPTING_TO_SAVE_FILE_TO_AWS_S3_LOG_MSG, this.awsS3BucketConfig.getBucketName(), objectKey);
    this.saveContentToAwsS3Bucket(putObjectRequest);
    log.info(FILE_SAVED_TO_AWS_S3_LOG_MSG, this.awsS3BucketConfig.getBucketName(), objectKey);
}
Also used : InputStream(java.io.InputStream) ObjectMetadata(com.amazonaws.services.s3.model.ObjectMetadata) PutObjectRequest(com.amazonaws.services.s3.model.PutObjectRequest)
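
The two helpers this method delegates to, createObjectMetadata and createPutObjectRequest, are not part of the excerpt. The following is a hypothetical sketch of what they might look like; the bucketName field, the text/css content type, the public-read canned ACL, and the dropped DynamicSkinInstanceData parameter are all assumptions for illustration, not uPortal's actual implementation.

import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

import java.io.InputStream;
import java.nio.charset.StandardCharsets;

class AwsS3DynamicSkinServiceSketch {

    private final String bucketName = "example-skin-bucket";      // assumed configuration value

    ObjectMetadata createObjectMetadata(final String content) {
        final ObjectMetadata metadata = new ObjectMetadata();
        // Content length must match the bytes that will actually be streamed.
        metadata.setContentLength(content.getBytes(StandardCharsets.UTF_8).length);
        metadata.setContentType("text/css");                      // assumed: dynamic skins are compiled CSS
        return metadata;
    }

    PutObjectRequest createPutObjectRequest(final String objectKey,
                                            final InputStream inputStream,
                                            final ObjectMetadata metadata) {
        // Assumed: skin files are served directly from the bucket, hence the public-read ACL.
        return new PutObjectRequest(bucketName, objectKey, inputStream, metadata)
                .withCannedAcl(CannedAccessControlList.PublicRead);
    }
}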

Example 23 with PutObjectRequest

Use of com.amazonaws.services.s3.model.PutObjectRequest in project zeppelin by Apache.

The class S3NotebookRepo, method save.

@Override
public void save(Note note, AuthenticationInfo subject) throws IOException {
    GsonBuilder gsonBuilder = new GsonBuilder();
    gsonBuilder.setPrettyPrinting();
    Gson gson = gsonBuilder.create();
    String json = gson.toJson(note);
    String key = user + "/" + "notebook" + "/" + note.getId() + "/" + "note.json";
    File file = File.createTempFile("note", "json");
    try {
        Writer writer = new OutputStreamWriter(new FileOutputStream(file));
        writer.write(json);
        writer.close();
        PutObjectRequest putRequest = new PutObjectRequest(bucketName, key, file);
        if (useServerSideEncryption) {
            // Request server-side encryption.
            ObjectMetadata objectMetadata = new ObjectMetadata();
            objectMetadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
            putRequest.setMetadata(objectMetadata);
        }
        s3client.putObject(putRequest);
    } catch (AmazonClientException ace) {
        throw new IOException("Unable to store note in S3: " + ace, ace);
    } finally {
        FileUtils.deleteQuietly(file);
    }
}
Also used : GsonBuilder(com.google.gson.GsonBuilder) FileOutputStream(java.io.FileOutputStream) AmazonClientException(com.amazonaws.AmazonClientException) Gson(com.google.gson.Gson) OutputStreamWriter(java.io.OutputStreamWriter) IOException(java.io.IOException) File(java.io.File) ObjectMetadata(com.amazonaws.services.s3.model.ObjectMetadata) Writer(java.io.Writer) PutObjectRequest(com.amazonaws.services.s3.model.PutObjectRequest)
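
If the temporary file is not needed, the same server-side-encryption request can be built straight from a byte array. Below is a minimal sketch under assumed client construction, bucket name and key layout.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class EncryptedNoteUploadSketch {
    public static void main(String[] args) {
        AmazonS3 s3client = AmazonS3ClientBuilder.defaultClient();   // assumed: default credentials and region

        String json = "{\"id\":\"2A94M5J1Z\",\"name\":\"my note\"}"; // stand-in for gson.toJson(note)
        byte[] body = json.getBytes(StandardCharsets.UTF_8);

        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(body.length);
        // Request SSE-S3 (AES-256), exactly as the Zeppelin snippet does for its file upload.
        metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);

        PutObjectRequest putRequest = new PutObjectRequest(
                "example-notebook-bucket",                           // assumed bucket
                "user/notebook/2A94M5J1Z/note.json",                 // assumed key layout
                new ByteArrayInputStream(body),
                metadata);
        s3client.putObject(putRequest);
    }
}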

Example 24 with PutObjectRequest

Use of com.amazonaws.services.s3.model.PutObjectRequest in project YCSB by brianfrankcooper.

The class S3Client, method writeToStorage.

/**
  * Upload a new object to S3 or update an existing object on S3.
  *
  * @param bucket
  *            The name of the bucket.
  * @param key
  *            The key of the object to upload or update.
  * @param values
  *            The data to be written to the object.
  * @param updateMarker
  *            If true, a new object is uploaded to S3; if false, an
  *            existing object is re-uploaded with the same total size.
  * @param sseLocal
  *            "true" to request SSE-S3 (AES-256) server-side encryption.
  * @param ssecLocal
  *            Customer-provided encryption key for SSE-C, or null if unused.
  * @return Status.OK on success, Status.ERROR otherwise.
  */
protected Status writeToStorage(String bucket, String key, HashMap<String, ByteIterator> values, Boolean updateMarker, String sseLocal, SSECustomerKey ssecLocal) {
    int totalSize = 0;
    //number of fields to concatenate
    int fieldCount = values.size();
    // getting the first field in the values
    Object keyToSearch = values.keySet().toArray()[0];
    // getting the content of just one field
    byte[] sourceArray = values.get(keyToSearch).toArray();
    //size of each array
    int sizeArray = sourceArray.length;
    if (updateMarker) {
        totalSize = sizeArray * fieldCount;
    } else {
        try {
            Map.Entry<S3Object, ObjectMetadata> objectAndMetadata = getS3ObjectAndMetadata(bucket, key, ssecLocal);
            int sizeOfFile = (int) objectAndMetadata.getValue().getContentLength();
            fieldCount = sizeOfFile / sizeArray;
            totalSize = sizeOfFile;
            objectAndMetadata.getKey().close();
        } catch (Exception e) {
            System.err.println("Not possible to get the object :" + key);
            e.printStackTrace();
            return Status.ERROR;
        }
    }
    byte[] destinationArray = new byte[totalSize];
    int offset = 0;
    for (int i = 0; i < fieldCount; i++) {
        System.arraycopy(sourceArray, 0, destinationArray, offset, sizeArray);
        offset += sizeArray;
    }
    try (InputStream input = new ByteArrayInputStream(destinationArray)) {
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(totalSize);
        PutObjectRequest putObjectRequest = null;
        if (sseLocal.equals("true")) {
            metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
            putObjectRequest = new PutObjectRequest(bucket, key, input, metadata);
        } else if (ssecLocal != null) {
            putObjectRequest = new PutObjectRequest(bucket, key, input, metadata).withSSECustomerKey(ssecLocal);
        } else {
            putObjectRequest = new PutObjectRequest(bucket, key, input, metadata);
        }
        try {
            PutObjectResult res = s3Client.putObject(putObjectRequest);
            if (res.getETag() == null) {
                return Status.ERROR;
            } else {
                if (sseLocal.equals("true")) {
                    System.out.println("Uploaded object encryption status is " + res.getSSEAlgorithm());
                } else if (ssecLocal != null) {
                    System.out.println("Uploaded object encryption status is " + res.getSSEAlgorithm());
                }
            }
        } catch (Exception e) {
            System.err.println("Not possible to write object :" + key);
            e.printStackTrace();
            return Status.ERROR;
        }
    } catch (Exception e) {
        System.err.println("Error in the creation of the stream :" + e.toString());
        e.printStackTrace();
        return Status.ERROR;
    }
    return Status.OK;
}
Also used : ByteArrayInputStream(java.io.ByteArrayInputStream) PutObjectResult(com.amazonaws.services.s3.model.PutObjectResult) InputStream(java.io.InputStream) S3Object(com.amazonaws.services.s3.model.S3Object) HashMap(java.util.HashMap) ObjectMetadata(com.amazonaws.services.s3.model.ObjectMetadata) DBException(com.yahoo.ycsb.DBException) PutObjectRequest(com.amazonaws.services.s3.model.PutObjectRequest)
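
The SSE-C branch (ssecLocal) only works if the same customer-provided key is presented again on every later read of the object. A hedged sketch of that round trip, separate from the YCSB driver, follows; the client construction, bucket, key, and in-place key generation are all assumptions for illustration.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.SSECustomerKey;

import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class SseCustomerKeySketch {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();   // assumed: default credentials and region

        // Generate a 256-bit AES key; in a real benchmark run this would come from configuration.
        KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(256);
        SecretKey secretKey = keyGenerator.generateKey();
        SSECustomerKey sseKey = new SSECustomerKey(secretKey);

        byte[] data = "field0-field1-field2".getBytes(StandardCharsets.UTF_8);
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(data.length);

        // Upload with the customer-provided key, mirroring the ssecLocal branch above.
        s3Client.putObject(new PutObjectRequest(
                "example-ycsb-bucket", "usertable/user1", new ByteArrayInputStream(data), metadata)
                .withSSECustomerKey(sseKey));

        // Reading the object back requires presenting the same key again.
        S3Object object = s3Client.getObject(
                new GetObjectRequest("example-ycsb-bucket", "usertable/user1").withSSECustomerKey(sseKey));
        object.close();
    }
}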

Example 25 with PutObjectRequest

Use of com.amazonaws.services.s3.model.PutObjectRequest in project camel by Apache.

The class S3Producer, method processMultiPart.

public void processMultiPart(final Exchange exchange) throws Exception {
    File filePayload = null;
    Object obj = exchange.getIn().getMandatoryBody();
    // Need to check if the message body is WrappedFile
    if (obj instanceof WrappedFile) {
        obj = ((WrappedFile<?>) obj).getFile();
    }
    if (obj instanceof File) {
        filePayload = (File) obj;
    } else {
        throw new InvalidArgumentException("aws-s3: MultiPart upload requires a File input.");
    }
    ObjectMetadata objectMetadata = determineMetadata(exchange);
    if (objectMetadata.getContentLength() == 0) {
        objectMetadata.setContentLength(filePayload.length());
    }
    final String keyName = determineKey(exchange);
    final InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(getConfiguration().getBucketName(), keyName, objectMetadata);
    String storageClass = determineStorageClass(exchange);
    if (storageClass != null) {
        initRequest.setStorageClass(StorageClass.fromValue(storageClass));
    }
    String cannedAcl = exchange.getIn().getHeader(S3Constants.CANNED_ACL, String.class);
    if (cannedAcl != null) {
        CannedAccessControlList objectAcl = CannedAccessControlList.valueOf(cannedAcl);
        initRequest.setCannedACL(objectAcl);
    }
    AccessControlList acl = exchange.getIn().getHeader(S3Constants.ACL, AccessControlList.class);
    if (acl != null) {
        // note: if cannedacl and acl are both specified the last one will be used. refer to
        // PutObjectRequest#setAccessControlList for more details
        initRequest.setAccessControlList(acl);
    }
    LOG.trace("Initiating multipart upload [{}] from exchange [{}]...", initRequest, exchange);
    final InitiateMultipartUploadResult initResponse = getEndpoint().getS3Client().initiateMultipartUpload(initRequest);
    final long contentLength = objectMetadata.getContentLength();
    final List<PartETag> partETags = new ArrayList<PartETag>();
    long partSize = getConfiguration().getPartSize();
    CompleteMultipartUploadResult uploadResult = null;
    long filePosition = 0;
    try {
        for (int part = 1; filePosition < contentLength; part++) {
            partSize = Math.min(partSize, contentLength - filePosition);
            UploadPartRequest uploadRequest = new UploadPartRequest().withBucketName(getConfiguration().getBucketName()).withKey(keyName).withUploadId(initResponse.getUploadId()).withPartNumber(part).withFileOffset(filePosition).withFile(filePayload).withPartSize(partSize);
            LOG.trace("Uploading part [{}] for {}", part, keyName);
            partETags.add(getEndpoint().getS3Client().uploadPart(uploadRequest).getPartETag());
            filePosition += partSize;
        }
        CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(getConfiguration().getBucketName(), keyName, initResponse.getUploadId(), partETags);
        uploadResult = getEndpoint().getS3Client().completeMultipartUpload(compRequest);
    } catch (Exception e) {
        getEndpoint().getS3Client().abortMultipartUpload(new AbortMultipartUploadRequest(getConfiguration().getBucketName(), keyName, initResponse.getUploadId()));
        throw e;
    }
    Message message = getMessageForResponse(exchange);
    message.setHeader(S3Constants.E_TAG, uploadResult.getETag());
    if (uploadResult.getVersionId() != null) {
        message.setHeader(S3Constants.VERSION_ID, uploadResult.getVersionId());
    }
    if (getConfiguration().isDeleteAfterWrite() && filePayload != null) {
        FileUtil.deleteFile(filePayload);
    }
}
Also used : CannedAccessControlList(com.amazonaws.services.s3.model.CannedAccessControlList) AccessControlList(com.amazonaws.services.s3.model.AccessControlList) InitiateMultipartUploadResult(com.amazonaws.services.s3.model.InitiateMultipartUploadResult) Message(org.apache.camel.Message) InitiateMultipartUploadRequest(com.amazonaws.services.s3.model.InitiateMultipartUploadRequest) ArrayList(java.util.ArrayList) UploadPartRequest(com.amazonaws.services.s3.model.UploadPartRequest) AbortMultipartUploadRequest(com.amazonaws.services.s3.model.AbortMultipartUploadRequest) CompleteMultipartUploadResult(com.amazonaws.services.s3.model.CompleteMultipartUploadResult) PartETag(com.amazonaws.services.s3.model.PartETag) Endpoint(org.apache.camel.Endpoint) InvalidArgumentException(com.amazonaws.services.cloudfront.model.InvalidArgumentException) WrappedFile(org.apache.camel.WrappedFile) File(java.io.File) ObjectMetadata(com.amazonaws.services.s3.model.ObjectMetadata) CompleteMultipartUploadRequest(com.amazonaws.services.s3.model.CompleteMultipartUploadRequest)
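
When the fine-grained control of this Camel producer is not needed, the SDK's TransferManager runs the same split-into-parts, upload-part and complete sequence internally. A minimal sketch under assumed bucket, key, file path and part-size threshold follows.

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

import java.io.File;

public class MultipartUploadSketch {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();         // assumed: default credentials and region
        TransferManager transferManager = TransferManagerBuilder.standard()
                .withS3Client(s3)
                .withMultipartUploadThreshold(25L * 1024 * 1024)     // use multipart above 25 MB (assumed threshold)
                .build();
        try {
            Upload upload = transferManager.upload(
                    "example-bucket", "backups/large-file.bin", new File("/tmp/large-file.bin"));
            upload.waitForCompletion();                              // blocks until all parts are uploaded and completed
        } finally {
            transferManager.shutdownNow(false);                      // release TransferManager threads, keep the S3 client usable
        }
    }
}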

Aggregations

PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest): 33 usages
ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata): 21 usages
Upload (com.amazonaws.services.s3.transfer.Upload): 11 usages
AmazonClientException (com.amazonaws.AmazonClientException): 10 usages
PutObjectResult (com.amazonaws.services.s3.model.PutObjectResult): 8 usages
Exchange (org.apache.camel.Exchange): 8 usages
Processor (org.apache.camel.Processor): 8 usages
Test (org.junit.Test): 8 usages
InputStream (java.io.InputStream): 7 usages
DataStoreException (org.apache.jackrabbit.core.data.DataStoreException): 7 usages
File (java.io.File): 6 usages
IOException (java.io.IOException): 6 usages
ByteArrayInputStream (java.io.ByteArrayInputStream): 5 usages
AmazonServiceException (com.amazonaws.AmazonServiceException): 4 usages
S3Object (com.amazonaws.services.s3.model.S3Object): 4 usages
Date (java.util.Date): 4 usages
CopyObjectRequest (com.amazonaws.services.s3.model.CopyObjectRequest): 3 usages
Copy (com.amazonaws.services.s3.transfer.Copy): 3 usages
FileInputStream (java.io.FileInputStream): 3 usages
HashMap (java.util.HashMap): 3 usages