Search in sources:

Example 21 with Transfer

use of com.amazonaws.services.s3.transfer.Transfer in project herd by FINRAOS.

The class S3DaoImpl, method uploadDirectory.

@Override
public S3FileTransferResultsDto uploadDirectory(final S3FileTransferRequestParamsDto params) throws InterruptedException {
    LOGGER.info("Uploading local directory to S3... localDirectory=\"{}\" s3KeyPrefix=\"{}\" s3BucketName=\"{}\"", params.getLocalPath(), params.getS3KeyPrefix(), params.getS3BucketName());
    // Perform the transfer.
    S3FileTransferResultsDto results = performTransfer(params, new Transferer() {

        @Override
        public Transfer performTransfer(TransferManager transferManager) {
            return s3Operations.uploadDirectory(params.getS3BucketName(), params.getS3KeyPrefix(), new File(params.getLocalPath()), params.isRecursive(), new ObjectMetadataProvider() {

                @Override
                public void provideObjectMetadata(File file, ObjectMetadata metadata) {
                    prepareMetadata(params, metadata);
                }
            }, transferManager);
        }
    });
    LOGGER.info("Uploaded local directory to S3. " + "localDirectory=\"{}\" s3KeyPrefix=\"{}\" s3BucketName=\"{}\" s3KeyCount={} totalBytesTransferred={} transferDuration=\"{}\"", params.getLocalPath(), params.getS3KeyPrefix(), params.getS3BucketName(), results.getTotalFilesTransferred(), results.getTotalBytesTransferred(), HerdDateUtils.formatDuration(results.getDurationMillis()));
    logOverallTransferRate(results);
    return results;
}
Also used: TransferManager (com.amazonaws.services.s3.transfer.TransferManager), Transfer (com.amazonaws.services.s3.transfer.Transfer), ObjectMetadataProvider (com.amazonaws.services.s3.transfer.ObjectMetadataProvider), S3FileTransferResultsDto (org.finra.herd.model.dto.S3FileTransferResultsDto), File (java.io.File), ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata)
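Both Transferer and ObjectMetadataProvider appear to declare a single method in this example, so on Java 8+ the two anonymous classes collapse into lambdas. A minimal sketch of the equivalent call, reusing params and prepareMetadata from the example above (this is an illustration, not a change the project itself makes, and it assumes Transferer really has only performTransfer):

    // Both Transferer and ObjectMetadataProvider are single-method interfaces here,
    // so the whole call collapses to two lambdas:
    S3FileTransferResultsDto results = performTransfer(params, transferManager ->
        s3Operations.uploadDirectory(params.getS3BucketName(), params.getS3KeyPrefix(),
            new File(params.getLocalPath()), params.isRecursive(),
            // The inner lambda receives each file and its metadata just before upload.
            (file, metadata) -> prepareMetadata(params, metadata), transferManager));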

Example 22 with Transfer

use of com.amazonaws.services.s3.transfer.Transfer in project herd by FINRAOS.

The class S3DaoImpl, method uploadFile.

@Override
public S3FileTransferResultsDto uploadFile(final S3FileTransferRequestParamsDto params) throws InterruptedException {
    LOGGER.info("Uploading local file to S3... localPath=\"{}\" s3Key=\"{}\" s3BucketName=\"{}\"", params.getLocalPath(), params.getS3KeyPrefix(), params.getS3BucketName());
    // Perform the transfer.
    S3FileTransferResultsDto results = performTransfer(params, new Transferer() {

        @Override
        public Transfer performTransfer(TransferManager transferManager) {
            // Get a handle to the local file.
            File localFile = new File(params.getLocalPath());
            // Create and prepare the metadata.
            ObjectMetadata metadata = new ObjectMetadata();
            prepareMetadata(params, metadata);
            // Create a put request with the parameters and the metadata.
            PutObjectRequest putObjectRequest = new PutObjectRequest(params.getS3BucketName(), params.getS3KeyPrefix(), localFile);
            putObjectRequest.setMetadata(metadata);
            return s3Operations.upload(putObjectRequest, transferManager);
        }
    });
    LOGGER.info("Uploaded local file to the S3. localPath=\"{}\" s3Key=\"{}\" s3BucketName=\"{}\" totalBytesTransferred={} transferDuration=\"{}\"", params.getLocalPath(), params.getS3KeyPrefix(), params.getS3BucketName(), results.getTotalBytesTransferred(), HerdDateUtils.formatDuration(results.getDurationMillis()));
    logOverallTransferRate(results);
    return results;
}
Also used: TransferManager (com.amazonaws.services.s3.transfer.TransferManager), Transfer (com.amazonaws.services.s3.transfer.Transfer), S3FileTransferResultsDto (org.finra.herd.model.dto.S3FileTransferResultsDto), File (java.io.File), ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata), PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest)
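Outside herd's Transferer wrapper, the same single-file upload can be driven directly with the SDK v1 TransferManager. A self-contained sketch with placeholder bucket, key, and file path; the AES-256 server-side-encryption metadata is only an illustrative stand-in for whatever prepareMetadata actually sets:

import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;

public class DirectUploadSketch {
    public static void main(String[] args) throws InterruptedException {
        TransferManager transferManager = TransferManagerBuilder.standard().build();
        try {
            ObjectMetadata metadata = new ObjectMetadata();
            // Illustrative only: herd's prepareMetadata decides the real settings.
            metadata.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION);
            PutObjectRequest request = new PutObjectRequest("my-bucket", "my/key.txt", new File("/tmp/local-file"))
                .withMetadata(metadata);
            // upload() is asynchronous and returns a Transfer handle.
            Upload upload = transferManager.upload(request);
            // Block until the transfer finishes; failures are rethrown here.
            upload.waitForCompletion();
        } finally {
            // Also shuts down the underlying S3 client by default.
            transferManager.shutdownNow();
        }
    }
}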

Example 23 with Transfer

use of com.amazonaws.services.s3.transfer.Transfer in project herd by FINRAOS.

The class S3DaoImpl, method downloadDirectory.

@Override
public S3FileTransferResultsDto downloadDirectory(final S3FileTransferRequestParamsDto params) throws InterruptedException {
    LOGGER.info("Downloading S3 directory to the local system... s3KeyPrefix=\"{}\" s3BucketName=\"{}\" localDirectory=\"{}\"", params.getS3KeyPrefix(), params.getS3BucketName(), params.getLocalPath());
    // Note that the directory download always recursively copies sub-directories.
    // To avoid recursing, we would have to list the files on S3 (AmazonS3Client#listObjects) and manually copy them one at a time.
    // Perform the transfer.
    S3FileTransferResultsDto results = performTransfer(params, new Transferer() {

        @Override
        public Transfer performTransfer(TransferManager transferManager) {
            return s3Operations.downloadDirectory(params.getS3BucketName(), params.getS3KeyPrefix(), new File(params.getLocalPath()), transferManager);
        }
    });
    LOGGER.info("Downloaded S3 directory to the local system. " + "s3KeyPrefix=\"{}\" s3BucketName=\"{}\" localDirectory=\"{}\" s3KeyCount={} totalBytesTransferred={} transferDuration=\"{}\"", params.getS3KeyPrefix(), params.getS3BucketName(), params.getLocalPath(), results.getTotalFilesTransferred(), results.getTotalBytesTransferred(), HerdDateUtils.formatDuration(results.getDurationMillis()));
    logOverallTransferRate(results);
    return results;
}
Also used: TransferManager (com.amazonaws.services.s3.transfer.TransferManager), Transfer (com.amazonaws.services.s3.transfer.Transfer), S3FileTransferResultsDto (org.finra.herd.model.dto.S3FileTransferResultsDto), File (java.io.File)
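The comment in the method explains that directory downloads always recurse. A non-recursive variant would list only the keys directly under the prefix (using a "/" delimiter) and download them one at a time. A rough sketch, assuming an already-configured client and transfer manager and a prefix that ends with "/":

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import com.amazonaws.services.s3.transfer.TransferManager;
import java.io.File;

void downloadTopLevelOnly(AmazonS3 s3, TransferManager transferManager,
        String bucket, String prefix, File localDir) throws InterruptedException {
    // The delimiter stops the listing at the first "/" past the prefix, so keys
    // in sub-"directories" come back as common prefixes and are skipped.
    ListObjectsRequest request = new ListObjectsRequest()
        .withBucketName(bucket).withPrefix(prefix).withDelimiter("/");
    ObjectListing listing;
    do {
        listing = s3.listObjects(request);
        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            String relativeName = summary.getKey().substring(prefix.length());
            if (relativeName.isEmpty()) {
                continue; // skip the zero-byte "directory marker" object, if present
            }
            transferManager.download(bucket, summary.getKey(), new File(localDir, relativeName))
                .waitForCompletion();
        }
        // Page through results if the listing was truncated.
        request.setMarker(listing.getNextMarker());
    } while (listing.isTruncated());
}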

Example 24 with Transfer

use of com.amazonaws.services.s3.transfer.Transfer in project stocator by CODAIT.

The class COSAPIClient, method initiate.

@Override
public void initiate(String scheme) throws IOException, ConfigurationParseException {
    mCachedSparkOriginated = new HashMap<String, Boolean>();
    mCachedSparkJobsStatus = new HashMap<String, Boolean>();
    schemaProvided = scheme;
    Properties props = ConfigurationHandler.initialize(filesystemURI, conf, scheme);
    // Initialize the in-memory cache
    int cacheSize = conf.getInt(CACHE_SIZE, GUAVA_CACHE_SIZE_DEFAULT);
    memoryCache = MemoryCache.getInstance(cacheSize);
    // Set bucket name property
    mBucket = props.getProperty(COS_BUCKET_PROPERTY);
    workingDir = new Path("/user", System.getProperty("user.name")).makeQualified(filesystemURI, getWorkingDirectory());
    LOG.trace("Working directory set to {}", workingDir);
    fModeAutomaticDelete = "true".equals(props.getProperty(FMODE_AUTOMATIC_DELETE_COS_PROPERTY, "false"));
    mIsV2Signer = "true".equals(props.getProperty(V2_SIGNER_TYPE_COS_PROPERTY, "false"));
    // Define COS client
    String accessKey = props.getProperty(ACCESS_KEY_COS_PROPERTY);
    String secretKey = props.getProperty(SECRET_KEY_COS_PROPERTY);
    if (accessKey == null) {
        throw new ConfigurationParseException("Access key is empty. Please provide a valid access key");
    }
    if (secretKey == null) {
        throw new ConfigurationParseException("Secret key is empty. Please provide a valid secret key");
    }
    BasicAWSCredentials creds = new BasicAWSCredentials(accessKey, secretKey);
    ClientConfiguration clientConf = new ClientConfiguration();
    int maxThreads = Utils.getInt(conf, FS_COS, FS_ALT_KEYS, MAX_THREADS, DEFAULT_MAX_THREADS);
    if (maxThreads < 2) {
        LOG.warn(MAX_THREADS + " must be at least 2: forcing to 2.");
        maxThreads = 2;
    }
    int totalTasks = Utils.getInt(conf, FS_COS, FS_ALT_KEYS, MAX_TOTAL_TASKS, DEFAULT_MAX_TOTAL_TASKS);
    long keepAliveTime = Utils.getLong(conf, FS_COS, FS_ALT_KEYS, KEEPALIVE_TIME, DEFAULT_KEEPALIVE_TIME);
    threadPoolExecutor = BlockingThreadPoolExecutorService.newInstance(maxThreads, maxThreads + totalTasks, keepAliveTime, TimeUnit.SECONDS, "s3a-transfer-shared");
    unboundedThreadPool = new ThreadPoolExecutor(maxThreads, Integer.MAX_VALUE, keepAliveTime, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(), BlockingThreadPoolExecutorService.newDaemonThreadFactory("s3a-transfer-unbounded"));
    boolean secureConnections = Utils.getBoolean(conf, FS_COS, FS_ALT_KEYS, SECURE_CONNECTIONS, DEFAULT_SECURE_CONNECTIONS);
    clientConf.setProtocol(secureConnections ? Protocol.HTTPS : Protocol.HTTP);
    String proxyHost = Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, PROXY_HOST, "");
    int proxyPort = Utils.getInt(conf, FS_COS, FS_ALT_KEYS, PROXY_PORT, -1);
    if (!proxyHost.isEmpty()) {
        clientConf.setProxyHost(proxyHost);
        if (proxyPort >= 0) {
            clientConf.setProxyPort(proxyPort);
        } else {
            if (secureConnections) {
                LOG.warn("Proxy host set without port. Using HTTPS default 443");
                clientConf.setProxyPort(443);
            } else {
                LOG.warn("Proxy host set without port. Using HTTP default 80");
                clientConf.setProxyPort(80);
            }
        }
        String proxyUsername = Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, PROXY_USERNAME);
        String proxyPassword = Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, PROXY_PASSWORD);
        if ((proxyUsername == null) != (proxyPassword == null)) {
            String msg = "Proxy error: " + PROXY_USERNAME + " or " + PROXY_PASSWORD + " set without the other.";
            LOG.error(msg);
            throw new IllegalArgumentException(msg);
        }
        clientConf.setProxyUsername(proxyUsername);
        clientConf.setProxyPassword(proxyPassword);
        clientConf.setProxyDomain(Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, PROXY_DOMAIN));
        clientConf.setProxyWorkstation(Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, PROXY_WORKSTATION));
        if (LOG.isDebugEnabled()) {
            LOG.debug("Using proxy server {}:{} as user {} on " + "domain {} as workstation {}", clientConf.getProxyHost(), clientConf.getProxyPort(), String.valueOf(clientConf.getProxyUsername()), clientConf.getProxyDomain(), clientConf.getProxyWorkstation());
        }
    } else if (proxyPort >= 0) {
        String msg = "Proxy error: " + PROXY_PORT + " set without " + PROXY_HOST;
        LOG.error(msg);
        throw new IllegalArgumentException(msg);
    }
    initConnectionSettings(conf, clientConf);
    if (mIsV2Signer) {
        clientConf.withSignerOverride("S3SignerType");
    }
    mClient = new AmazonS3Client(creds, clientConf);
    final String serviceUrl = props.getProperty(ENDPOINT_URL_COS_PROPERTY);
    if (serviceUrl != null && !serviceUrl.equals(amazonDefaultEndpoint)) {
        mClient.setEndpoint(serviceUrl);
    }
    mClient.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());
    // Set block size property (value configured in megabytes, default 128)
    String blockSizeString = props.getProperty(BLOCK_SIZE_COS_PROPERTY, "128");
    mBlockSize = Long.parseLong(blockSizeString) * 1024 * 1024L;
    bufferDirectory = Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, BUFFER_DIR);
    bufferDirectoryKey = Utils.getConfigKey(conf, FS_COS, FS_ALT_KEYS, BUFFER_DIR);
    LOG.trace("Buffer directory is set to {} for the key {}", bufferDirectory, bufferDirectoryKey);
    boolean autoCreateBucket = "true".equalsIgnoreCase(props.getProperty(AUTO_BUCKET_CREATE_COS_PROPERTY, "false"));
    partSize = Utils.getLong(conf, FS_COS, FS_ALT_KEYS, MULTIPART_SIZE, DEFAULT_MULTIPART_SIZE);
    multiPartThreshold = Utils.getLong(conf, FS_COS, FS_ALT_KEYS, MIN_MULTIPART_THRESHOLD, DEFAULT_MIN_MULTIPART_THRESHOLD);
    readAhead = Utils.getLong(conf, FS_COS, FS_ALT_KEYS, READAHEAD_RANGE, DEFAULT_READAHEAD_RANGE);
    LOG.debug(READAHEAD_RANGE + ":" + readAhead);
    inputPolicy = COSInputPolicy.getPolicy(Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, INPUT_FADVISE, INPUT_FADV_NORMAL));
    initTransferManager();
    maxKeys = Utils.getInt(conf, FS_COS, FS_ALT_KEYS, MAX_PAGING_KEYS, DEFAULT_MAX_PAGING_KEYS);
    flatListingFlag = Utils.getBoolean(conf, FS_COS, FS_ALT_KEYS, FLAT_LISTING, DEFAULT_FLAT_LISTING);
    if (autoCreateBucket) {
        try {
            boolean bucketExist = mClient.doesBucketExist(mBucket);
            if (bucketExist) {
                LOG.trace("Bucket {} exists", mBucket);
            } else {
                LOG.trace("Bucket {} doesn`t exists and autocreate", mBucket);
                String mRegion = props.getProperty(REGION_COS_PROPERTY);
                if (mRegion == null) {
                    mClient.createBucket(mBucket);
                } else {
                    LOG.trace("Creating bucket {} in region {}", mBucket, mRegion);
                    mClient.createBucket(mBucket, mRegion);
                }
            }
        } catch (AmazonServiceException ase) {
            /*
             * Ignore the BucketAlreadyExists exception, since multiple processes or threads
             * might try to create the bucket in parallel; it is therefore expected that
             * some of them will fail to create it.
             */
            if (!ase.getErrorCode().equals("BucketAlreadyExists")) {
                LOG.error(ase.getMessage());
                throw (ase);
            }
        } catch (Exception e) {
            LOG.error(e.getMessage());
            throw (e);
        }
    }
    initMultipartUploads(conf);
    enableMultiObjectsDelete = Utils.getBoolean(conf, FS_COS, FS_ALT_KEYS, ENABLE_MULTI_DELETE, true);
    blockUploadEnabled = Utils.getBoolean(conf, FS_COS, FS_ALT_KEYS, FAST_UPLOAD, DEFAULT_FAST_UPLOAD);
    if (blockUploadEnabled) {
        blockOutputBuffer = Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, FAST_UPLOAD_BUFFER, DEFAULT_FAST_UPLOAD_BUFFER);
        partSize = COSUtils.ensureOutputParameterInRange(MULTIPART_SIZE, partSize);
        blockFactory = COSDataBlocks.createFactory(this, blockOutputBuffer);
        blockOutputActiveBlocks = Utils.getInt(conf, FS_COS, FS_ALT_KEYS, FAST_UPLOAD_ACTIVE_BLOCKS, DEFAULT_FAST_UPLOAD_ACTIVE_BLOCKS);
        LOG.debug("Using COSBlockOutputStream with buffer = {}; block={};" + " queue limit={}", blockOutputBuffer, partSize, blockOutputActiveBlocks);
    } else {
        LOG.debug("Using COSOutputStream");
    }
}
Also used: StocatorPath (com.ibm.stocator.fs.common.StocatorPath), Path (org.apache.hadoop.fs.Path), ConfigurationParseException (com.ibm.stocator.fs.common.exception.ConfigurationParseException), Properties (java.util.Properties), LinkedBlockingQueue (java.util.concurrent.LinkedBlockingQueue), BasicAWSCredentials (com.amazonaws.auth.BasicAWSCredentials), AmazonServiceException (com.amazonaws.AmazonServiceException), AmazonClientException (com.amazonaws.AmazonClientException), InterruptedIOException (java.io.InterruptedIOException), AmazonS3Exception (com.amazonaws.services.s3.model.AmazonS3Exception), IOException (java.io.IOException), InvalidRequestException (org.apache.hadoop.fs.InvalidRequestException), FileNotFoundException (java.io.FileNotFoundException), COSUtils.translateException (com.ibm.stocator.fs.cos.COSUtils.translateException), AmazonS3Client (com.amazonaws.services.s3.AmazonS3Client), ThreadPoolExecutor (java.util.concurrent.ThreadPoolExecutor), ClientConfiguration (com.amazonaws.ClientConfiguration)
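Stripped of the configuration plumbing, the client construction in initiate follows the standard SDK v1 recipe for S3-compatible object stores: credentials plus a ClientConfiguration, an explicit endpoint, and path-style addressing. A condensed sketch of just that core, with placeholder keys and endpoint:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;

BasicAWSCredentials creds = new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY");
ClientConfiguration clientConf = new ClientConfiguration();
clientConf.setProtocol(Protocol.HTTPS);
// Some S3-compatible endpoints only speak the older V2 signature:
// clientConf.withSignerOverride("S3SignerType");
AmazonS3Client client = new AmazonS3Client(creds, clientConf);
// Placeholder endpoint; use the service URL of the target object store.
client.setEndpoint("s3.example-object-store.com");
// Path-style addressing (bucket in the path, not the hostname) is the
// usual choice for non-AWS endpoints.
client.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());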

Example 25 with Transfer

use of com.amazonaws.services.s3.transfer.Transfer in project spring-integration-aws by spring-projects.

The class S3MessageHandler, method handleRequestMessage.

@Override
protected Object handleRequestMessage(Message<?> requestMessage) {
    Command command = this.commandExpression.getValue(this.evaluationContext, requestMessage, Command.class);
    Assert.state(command != null, "'commandExpression' [" + this.commandExpression.getExpressionString() + "] cannot evaluate to null.");
    Transfer transfer = null;
    switch(command) {
        case UPLOAD:
            transfer = upload(requestMessage);
            break;
        case DOWNLOAD:
            transfer = download(requestMessage);
            break;
        case COPY:
            transfer = copy(requestMessage);
            break;
    }
    if (this.produceReply) {
        return transfer;
    } else {
        try {
            AmazonClientException amazonClientException = transfer.waitForException();
            if (amazonClientException != null) {
                throw amazonClientException;
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return null;
    }
}
Also used: AmazonClientException (com.amazonaws.AmazonClientException), PersistableTransfer (com.amazonaws.services.s3.transfer.PersistableTransfer), Transfer (com.amazonaws.services.s3.transfer.Transfer)
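When produceReply is true, the Transfer handle is returned while the transfer may still be in flight, so a downstream component has to wait or poll. A minimal sketch of such a consumer, using only methods on the Transfer interface:

import com.amazonaws.services.s3.transfer.Transfer;
import com.amazonaws.services.s3.transfer.TransferProgress;

void awaitTransfer(Transfer transfer) throws InterruptedException {
    TransferProgress progress = transfer.getProgress();
    while (!transfer.isDone()) {
        // getState() reports Waiting/InProgress/Completed/Canceled/Failed.
        System.out.printf("%s: %.1f%% of %d bytes%n", transfer.getState(),
            progress.getPercentTransferred(), progress.getTotalBytesToTransfer());
        Thread.sleep(500);
    }
    // Rethrows any AmazonClientException recorded during the transfer.
    transfer.waitForCompletion();
}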

Aggregations

S3FileTransferRequestParamsDto (org.finra.herd.model.dto.S3FileTransferRequestParamsDto): 18
File (java.io.File): 17
S3ObjectSummary (com.amazonaws.services.s3.model.S3ObjectSummary): 14
Test (org.junit.Test): 14
TransferManager (com.amazonaws.services.s3.transfer.TransferManager): 13
AmazonClientException (com.amazonaws.AmazonClientException): 11
AmazonServiceException (com.amazonaws.AmazonServiceException): 8
Tag (com.amazonaws.services.s3.model.Tag): 8
Transfer (com.amazonaws.services.s3.transfer.Transfer): 8
IOException (java.io.IOException): 8
AmazonS3Client (com.amazonaws.services.s3.AmazonS3Client): 7
ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata): 7
S3FileTransferResultsDto (org.finra.herd.model.dto.S3FileTransferResultsDto): 7
PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest): 5
ThreadPoolExecutor (java.util.concurrent.ThreadPoolExecutor): 5
BusinessObjectDataKey (org.finra.herd.model.api.xml.BusinessObjectDataKey): 5
StorageFile (org.finra.herd.model.api.xml.StorageFile): 5
TransferState (com.amazonaws.services.s3.transfer.Transfer.TransferState): 4
ArrayList (java.util.ArrayList): 4
AbstractDaoTest (org.finra.herd.dao.AbstractDaoTest): 4