
Example 51 with Region

Use of com.amazonaws.services.s3.model.Region in the project aws-doc-sdk-examples by awsdocs.

The class Example, method main.

public static void main(String[] args) {
    final String BUCKET_NAME = "extended-client-bucket";
    final String TOPIC_NAME = "extended-client-topic";
    final String QUEUE_NAME = "extended-client-queue";
    final Regions region = Regions.DEFAULT_REGION;
    // The message size threshold controls the maximum message size that the extended
    // client publishes directly through SNS. Payloads of messages exceeding this value
    // are stored in S3. The default is 256 KB, the maximum message size in SNS (and SQS).
    final int EXTENDED_STORAGE_MESSAGE_SIZE_THRESHOLD = 32;
    // Initialize SNS, SQS and S3 clients
    final AmazonSNS snsClient = AmazonSNSClientBuilder.standard().withRegion(region).build();
    final AmazonSQS sqsClient = AmazonSQSClientBuilder.standard().withRegion(region).build();
    final AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withRegion(region).build();
    // Create bucket, topic, queue and subscription
    s3Client.createBucket(BUCKET_NAME);
    final String topicArn = snsClient.createTopic(new CreateTopicRequest().withName(TOPIC_NAME)).getTopicArn();
    final String queueUrl = sqsClient.createQueue(new CreateQueueRequest().withQueueName(QUEUE_NAME)).getQueueUrl();
    final String subscriptionArn = Topics.subscribeQueue(snsClient, sqsClient, topicArn, queueUrl);
    // To read message content stored in S3 transparently through SQS extended client,
    // set the RawMessageDelivery subscription attribute to TRUE
    final SetSubscriptionAttributesRequest subscriptionAttributesRequest = new SetSubscriptionAttributesRequest();
    subscriptionAttributesRequest.setSubscriptionArn(subscriptionArn);
    subscriptionAttributesRequest.setAttributeName("RawMessageDelivery");
    subscriptionAttributesRequest.setAttributeValue("TRUE");
    snsClient.setSubscriptionAttributes(subscriptionAttributesRequest);
    // Initialize SNS extended client
    // PayloadSizeThreshold triggers message content storage in S3 when the threshold is exceeded.
    // To store all message content in S3, use the AlwaysThroughS3 flag.
    final SNSExtendedClientConfiguration snsExtendedClientConfiguration = new SNSExtendedClientConfiguration().withPayloadSupportEnabled(s3Client, BUCKET_NAME).withPayloadSizeThreshold(EXTENDED_STORAGE_MESSAGE_SIZE_THRESHOLD);
    final AmazonSNSExtendedClient snsExtendedClient = new AmazonSNSExtendedClient(snsClient, snsExtendedClientConfiguration);
    // Publish message via SNS with storage in S3
    final String message = "This message is stored in S3 as it exceeds the threshold of 32 bytes set above.";
    snsExtendedClient.publish(topicArn, message);
    // Initialize SQS extended client
    final ExtendedClientConfiguration sqsExtendedClientConfiguration = new ExtendedClientConfiguration().withPayloadSupportEnabled(s3Client, BUCKET_NAME);
    final AmazonSQSExtendedClient sqsExtendedClient = new AmazonSQSExtendedClient(sqsClient, sqsExtendedClientConfiguration);
    // Read the message from the queue
    final ReceiveMessageResult result = sqsExtendedClient.receiveMessage(queueUrl);
    System.out.println("Received message is " + result.getMessages().get(0).getBody());
}
Also used : AmazonS3(com.amazonaws.services.s3.AmazonS3) Regions(com.amazonaws.regions.Regions) AmazonSQS(com.amazonaws.services.sqs.AmazonSQS) CreateTopicRequest(com.amazonaws.services.sns.model.CreateTopicRequest) AmazonSQSExtendedClient(com.amazon.sqs.javamessaging.AmazonSQSExtendedClient) AmazonSNS(com.amazonaws.services.sns.AmazonSNS) AmazonSNSExtendedClient(software.amazon.sns.AmazonSNSExtendedClient) CreateQueueRequest(com.amazonaws.services.sqs.model.CreateQueueRequest) SetSubscriptionAttributesRequest(com.amazonaws.services.sns.model.SetSubscriptionAttributesRequest) SNSExtendedClientConfiguration(software.amazon.sns.SNSExtendedClientConfiguration) ExtendedClientConfiguration(com.amazon.sqs.javamessaging.ExtendedClientConfiguration) ReceiveMessageResult(com.amazonaws.services.sqs.model.ReceiveMessageResult)
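The decision the extended client makes in the example above is a byte-length comparison: a message whose serialized payload exceeds the configured threshold is offloaded to S3, otherwise it is published in-band. The standalone sketch below illustrates that comparison only; the method name `exceedsThreshold` is illustrative and not part of the extended-client API.

```java
import java.nio.charset.StandardCharsets;

public class PayloadThresholdSketch {
    // Mirrors the 32-byte threshold configured in the example above.
    static final int THRESHOLD_BYTES = 32;

    // Returns true when a message body of this size would be offloaded to S3.
    static boolean exceedsThreshold(String messageBody) {
        return messageBody.getBytes(StandardCharsets.UTF_8).length > THRESHOLD_BYTES;
    }

    public static void main(String[] args) {
        String message = "This message is stored in S3 as it exceeds the threshold of 32 bytes set above.";
        System.out.println(exceedsThreshold(message)); // the example message is offloaded
        System.out.println(exceedsThreshold("short")); // small payloads stay in SNS
    }
}
```

Note that the threshold applies to the UTF-8 byte length, not the character count, so multi-byte characters cross the limit sooner.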

Example 52 with Region

Use of com.amazonaws.services.s3.model.Region in the project aws-doc-sdk-examples by awsdocs.

The class App, method main.

public static void main(String[] args) {
    if (args.length < 2) {
        System.out.format("Usage: <the bucket name> <the AWS Region to use>\n" + "Example: my-test-bucket us-east-2\n");
        return;
    }
    String bucket_name = args[0];
    String region = args[1];
    s3 = AmazonS3ClientBuilder.standard().withCredentials(new ProfileCredentialsProvider()).withRegion(region).build();
    // List current buckets.
    ListMyBuckets();
    // Create the bucket.
    if (s3.doesBucketExistV2(bucket_name)) {
        System.out.format("\nCannot create the bucket. \n" + "A bucket named '%s' already exists.", bucket_name);
        return;
    } else {
        try {
            System.out.format("\nCreating a new bucket named '%s'...\n\n", bucket_name);
            s3.createBucket(new CreateBucketRequest(bucket_name, region));
        } catch (AmazonS3Exception e) {
            System.err.println(e.getErrorMessage());
        }
    }
    // Confirm that the bucket was created.
    ListMyBuckets();
    // Delete the bucket.
    try {
        System.out.format("\nDeleting the bucket named '%s'...\n\n", bucket_name);
        s3.deleteBucket(bucket_name);
    } catch (AmazonS3Exception e) {
        System.err.println(e.getErrorMessage());
    }
    // Confirm that the bucket was deleted.
    ListMyBuckets();
}
Also used : CreateBucketRequest(com.amazonaws.services.s3.model.CreateBucketRequest) ProfileCredentialsProvider(com.amazonaws.auth.profile.ProfileCredentialsProvider) AmazonS3Exception(com.amazonaws.services.s3.model.AmazonS3Exception)
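The example above only checks `doesBucketExistV2` before creating a bucket; `createBucket` can still fail on a name that violates S3's naming rules (3-63 characters; lowercase letters, digits, hyphens, and dots; must start and end with a letter or digit). A minimal pre-flight check might look like the sketch below. `looksValid` is a hypothetical helper and deliberately partial: for instance, it does not reject names formatted as IP addresses.

```java
public class BucketNameCheck {
    // Partial check of the S3 bucket-naming rules: 3-63 chars, lowercase
    // letters, digits, hyphens, dots, starting and ending with a letter
    // or digit. Not the full rule set.
    static boolean looksValid(String name) {
        return name != null
            && name.matches("[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]");
    }

    public static void main(String[] args) {
        System.out.println(looksValid("my-test-bucket")); // true
        System.out.println(looksValid("My_Bucket"));      // false: uppercase and underscore
        System.out.println(looksValid("ab"));             // false: shorter than 3 chars
    }
}
```

Validating locally first gives a clearer error message than parsing the `AmazonS3Exception` the service would return.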

Example 53 with Region

Use of com.amazonaws.services.s3.model.Region in the project apex-malhar by apache.

The class S3Reconciler, method setup.

@Override
public void setup(Context.OperatorContext context) {
    s3client = new AmazonS3Client(new BasicAWSCredentials(accessKey, secretKey));
    if (region != null) {
        s3client.setRegion(Region.getRegion(Regions.fromName(region)));
    }
    filePath = context.getValue(DAG.APPLICATION_PATH);
    try {
        fs = FileSystem.newInstance(new Path(filePath).toUri(), new Configuration());
    } catch (IOException e) {
        logger.error("Unable to create FileSystem: {}", e.getMessage());
    }
    super.setup(context);
}
Also used : Path(org.apache.hadoop.fs.Path) AmazonS3Client(com.amazonaws.services.s3.AmazonS3Client) Configuration(org.apache.hadoop.conf.Configuration) IOException(java.io.IOException) BasicAWSCredentials(com.amazonaws.auth.BasicAWSCredentials)

Example 54 with Region

Use of com.amazonaws.services.s3.model.Region in the project stocator by SparkTC.

The class COSAPIClient, method initiate.

@Override
public void initiate(String scheme) throws IOException, ConfigurationParseException {
    mCachedSparkOriginated = new ConcurrentHashMap<String, Boolean>();
    mCachedSparkJobsStatus = new HashMap<String, Boolean>();
    schemaProvided = scheme;
    Properties props = ConfigurationHandler.initialize(filesystemURI, conf, scheme);
    // Set bucket name property
    int cacheSize = conf.getInt(CACHE_SIZE, GUAVA_CACHE_SIZE_DEFAULT);
    memoryCache = MemoryCache.getInstance(cacheSize);
    mBucket = props.getProperty(COS_BUCKET_PROPERTY);
    workingDir = new Path("/user", System.getProperty("user.name")).makeQualified(filesystemURI, getWorkingDirectory());
    LOG.trace("Working directory set to {}", workingDir);
    fModeAutomaticDelete = "true".equals(conf.get(FS_STOCATOR_FMODE_DATA_CLEANUP, FS_STOCATOR_FMODE_DATA_CLEANUP_DEFAULT));
    mIsV2Signer = "true".equals(props.getProperty(V2_SIGNER_TYPE_COS_PROPERTY, "false"));
    // Define COS client
    String accessKey = props.getProperty(ACCESS_KEY_COS_PROPERTY);
    String secretKey = props.getProperty(SECRET_KEY_COS_PROPERTY);
    String sessionToken = props.getProperty(SESSION_TOKEN_COS_PROPERTY);
    if (accessKey == null) {
        throw new ConfigurationParseException("Access KEY is empty. Please provide valid access key");
    }
    if (secretKey == null) {
        throw new ConfigurationParseException("Secret KEY is empty. Please provide valid secret key");
    }
    AWSCredentials creds;
    if (sessionToken == null) {
        creds = new BasicAWSCredentials(accessKey, secretKey);
    } else {
        creds = new BasicSessionCredentials(accessKey, secretKey, sessionToken);
    }
    ClientConfiguration clientConf = new ClientConfiguration();
    int maxThreads = Utils.getInt(conf, FS_COS, FS_ALT_KEYS, MAX_THREADS, DEFAULT_MAX_THREADS);
    if (maxThreads < 2) {
        LOG.warn(MAX_THREADS + " must be at least 2: forcing to 2.");
        maxThreads = 2;
    }
    int totalTasks = Utils.getInt(conf, FS_COS, FS_ALT_KEYS, MAX_TOTAL_TASKS, DEFAULT_MAX_TOTAL_TASKS);
    long keepAliveTime = Utils.getLong(conf, FS_COS, FS_ALT_KEYS, KEEPALIVE_TIME, DEFAULT_KEEPALIVE_TIME);
    threadPoolExecutor = BlockingThreadPoolExecutorService.newInstance(maxThreads, maxThreads + totalTasks, keepAliveTime, TimeUnit.SECONDS, "s3a-transfer-shared");
    unboundedThreadPool = new ThreadPoolExecutor(maxThreads, Integer.MAX_VALUE, keepAliveTime, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(), BlockingThreadPoolExecutorService.newDaemonThreadFactory("s3a-transfer-unbounded"));
    boolean secureConnections = Utils.getBoolean(conf, FS_COS, FS_ALT_KEYS, SECURE_CONNECTIONS, DEFAULT_SECURE_CONNECTIONS);
    clientConf.setProtocol(secureConnections ? Protocol.HTTPS : Protocol.HTTP);
    String proxyHost = Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, PROXY_HOST, "");
    int proxyPort = Utils.getInt(conf, FS_COS, FS_ALT_KEYS, PROXY_PORT, -1);
    if (!proxyHost.isEmpty()) {
        clientConf.setProxyHost(proxyHost);
        if (proxyPort >= 0) {
            clientConf.setProxyPort(proxyPort);
        } else {
            if (secureConnections) {
                LOG.warn("Proxy host set without port. Using HTTPS default 443");
                clientConf.setProxyPort(443);
            } else {
                LOG.warn("Proxy host set without port. Using HTTP default 80");
                clientConf.setProxyPort(80);
            }
        }
        String proxyUsername = Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, PROXY_USERNAME);
        String proxyPassword = Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, PROXY_PASSWORD);
        if ((proxyUsername == null) != (proxyPassword == null)) {
            String msg = "Proxy error: " + PROXY_USERNAME + " or " + PROXY_PASSWORD + " set without the other.";
            LOG.error(msg);
            throw new IllegalArgumentException(msg);
        }
        clientConf.setProxyUsername(proxyUsername);
        clientConf.setProxyPassword(proxyPassword);
        clientConf.setProxyDomain(Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, PROXY_DOMAIN));
        clientConf.setProxyWorkstation(Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, PROXY_WORKSTATION));
        if (LOG.isDebugEnabled()) {
            LOG.debug("Using proxy server {}:{} as user {} on " + "domain {} as workstation {}", clientConf.getProxyHost(), clientConf.getProxyPort(), String.valueOf(clientConf.getProxyUsername()), clientConf.getProxyDomain(), clientConf.getProxyWorkstation());
        }
    } else if (proxyPort >= 0) {
        String msg = "Proxy error: " + PROXY_PORT + " set without " + PROXY_HOST;
        LOG.error(msg);
        throw new IllegalArgumentException(msg);
    }
    initConnectionSettings(conf, clientConf);
    if (mIsV2Signer) {
        clientConf.withSignerOverride("S3SignerType");
    }
    mClient = new AmazonS3Client(creds, clientConf);
    final String serviceUrl = props.getProperty(ENDPOINT_URL_COS_PROPERTY);
    if (serviceUrl != null && !serviceUrl.equals(amazonDefaultEndpoint)) {
        mClient.setEndpoint(serviceUrl);
    }
    mClient.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());
    // Set block size property
    String mBlockSizeString = props.getProperty(BLOCK_SIZE_COS_PROPERTY, "128");
    mBlockSize = Long.parseLong(mBlockSizeString) * 1024 * 1024L;
    bufferDirectory = Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, BUFFER_DIR);
    bufferDirectoryKey = Utils.getConfigKey(conf, FS_COS, FS_ALT_KEYS, BUFFER_DIR);
    LOG.trace("Buffer directory is set to {} for the key {}", bufferDirectory, bufferDirectoryKey);
    boolean autoCreateBucket = "true".equalsIgnoreCase((props.getProperty(AUTO_BUCKET_CREATE_COS_PROPERTY, "false")));
    partSize = Utils.getLong(conf, FS_COS, FS_ALT_KEYS, MULTIPART_SIZE, DEFAULT_MULTIPART_SIZE);
    multiPartThreshold = Utils.getLong(conf, FS_COS, FS_ALT_KEYS, MIN_MULTIPART_THRESHOLD, DEFAULT_MIN_MULTIPART_THRESHOLD);
    readAhead = Utils.getLong(conf, FS_COS, FS_ALT_KEYS, READAHEAD_RANGE, DEFAULT_READAHEAD_RANGE);
    LOG.debug(READAHEAD_RANGE + ":" + readAhead);
    inputPolicy = COSInputPolicy.getPolicy(Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, INPUT_FADVISE, INPUT_FADV_NORMAL));
    initTransferManager();
    maxKeys = Utils.getInt(conf, FS_COS, FS_ALT_KEYS, MAX_PAGING_KEYS, DEFAULT_MAX_PAGING_KEYS);
    flatListingFlag = Utils.getBoolean(conf, FS_COS, FS_ALT_KEYS, FLAT_LISTING, DEFAULT_FLAT_LISTING);
    if (autoCreateBucket) {
        try {
            boolean bucketExist = mClient.doesBucketExist(mBucket);
            if (bucketExist) {
                LOG.trace("Bucket {} exists", mBucket);
            } else {
                LOG.trace("Bucket {} does not exist; creating it automatically", mBucket);
                String mRegion = props.getProperty(REGION_COS_PROPERTY);
                if (mRegion == null) {
                    mClient.createBucket(mBucket);
                } else {
                    LOG.trace("Creating bucket {} in region {}", mBucket, mRegion);
                    mClient.createBucket(mBucket, mRegion);
                }
            }
        } catch (AmazonServiceException ase) {
            /*
             * Ignore the BucketAlreadyExists exception: multiple processes or threads
             * might try to create the bucket in parallel, so some of them are
             * expected to fail to create it.
             */
            if (!ase.getErrorCode().equals("BucketAlreadyExists")) {
                LOG.error(ase.getMessage());
                throw (ase);
            }
        } catch (Exception e) {
            LOG.error(e.getMessage());
            throw (e);
        }
    }
    initMultipartUploads(conf);
    enableMultiObjectsDelete = Utils.getBoolean(conf, FS_COS, FS_ALT_KEYS, ENABLE_MULTI_DELETE, true);
    blockUploadEnabled = Utils.getBoolean(conf, FS_COS, FS_ALT_KEYS, FAST_UPLOAD, DEFAULT_FAST_UPLOAD);
    if (blockUploadEnabled) {
        blockOutputBuffer = Utils.getTrimmed(conf, FS_COS, FS_ALT_KEYS, FAST_UPLOAD_BUFFER, DEFAULT_FAST_UPLOAD_BUFFER);
        partSize = COSUtils.ensureOutputParameterInRange(MULTIPART_SIZE, partSize);
        blockFactory = COSDataBlocks.createFactory(this, blockOutputBuffer);
        blockOutputActiveBlocks = Utils.getInt(conf, FS_COS, FS_ALT_KEYS, FAST_UPLOAD_ACTIVE_BLOCKS, DEFAULT_FAST_UPLOAD_ACTIVE_BLOCKS);
        LOG.debug("Using COSBlockOutputStream with buffer = {}; block={};" + " queue limit={}", blockOutputBuffer, partSize, blockOutputActiveBlocks);
    } else {
        LOG.debug("Using COSOutputStream");
    }
    atomicWriteEnabled = Utils.getBoolean(conf, FS_COS, FS_ALT_KEYS, ATOMIC_WRITE, DEFAULT_ATOMIC_WRITE);
}
Also used : StocatorPath(com.ibm.stocator.fs.common.StocatorPath) Path(org.apache.hadoop.fs.Path) BasicSessionCredentials(com.amazonaws.auth.BasicSessionCredentials) ConfigurationParseException(com.ibm.stocator.fs.common.exception.ConfigurationParseException) Properties(java.util.Properties) LinkedBlockingQueue(java.util.concurrent.LinkedBlockingQueue) AWSCredentials(com.amazonaws.auth.AWSCredentials) BasicAWSCredentials(com.amazonaws.auth.BasicAWSCredentials) AmazonServiceException(com.amazonaws.AmazonServiceException) AmazonClientException(com.amazonaws.AmazonClientException) InterruptedIOException(java.io.InterruptedIOException) AmazonS3Exception(com.amazonaws.services.s3.model.AmazonS3Exception) IOException(java.io.IOException) FileNotFoundException(java.io.FileNotFoundException) COSUtils.translateException(com.ibm.stocator.fs.cos.COSUtils.translateException) UnsupportedEncodingException(java.io.UnsupportedEncodingException) AmazonS3Client(com.amazonaws.services.s3.AmazonS3Client) ThreadPoolExecutor(java.util.concurrent.ThreadPoolExecutor) ClientConfiguration(com.amazonaws.ClientConfiguration)
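A detail worth pulling out of the proxy handling above: when only a proxy host is configured, the port defaults to the protocol's standard port (443 for HTTPS, 80 for HTTP), and an explicit port always wins. The standalone sketch below mirrors that fallback; `resolveProxyPort` is an illustrative name, not a stocator API.

```java
public class ProxyPortSketch {
    // When the proxy port is unset (negative), fall back to the protocol
    // default, as COSAPIClient.initiate does: 443 for HTTPS, 80 for HTTP.
    static int resolveProxyPort(int configuredPort, boolean secureConnections) {
        if (configuredPort >= 0) {
            return configuredPort; // explicit configuration always wins
        }
        return secureConnections ? 443 : 80;
    }

    public static void main(String[] args) {
        System.out.println(resolveProxyPort(-1, true));   // 443
        System.out.println(resolveProxyPort(-1, false));  // 80
        System.out.println(resolveProxyPort(8080, true)); // 8080
    }
}
```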

Example 55 with Region

Use of com.amazonaws.services.s3.model.Region in the project stocator by SparkTC.

The class COSUtils, method translateException.

/**
 * Translate an exception raised in an operation into an IOException. The
 * specific type of IOException depends on the class of
 * {@link AmazonClientException} passed in, and any status codes included in
 * the operation. That is: HTTP error codes are examined and can be used to
 * build a more specific response.
 *
 * @param operation operation
 * @param path path operated on (may be null)
 * @param exception amazon exception raised
 * @return an IOE which wraps the caught exception
 */
@SuppressWarnings("ThrowableInstanceNeverThrown")
public static IOException translateException(String operation, String path, AmazonClientException exception) {
    String message = String.format("%s%s: %s", operation, path != null ? (" on " + path) : "", exception);
    if (!(exception instanceof AmazonServiceException)) {
        if (containsInterruptedException(exception)) {
            return (IOException) new InterruptedIOException(message).initCause(exception);
        }
        return new COSClientIOException(message, exception);
    } else {
        IOException ioe;
        AmazonServiceException ase = (AmazonServiceException) exception;
        // this exception is non-null if the service exception is a COS one
        AmazonS3Exception s3Exception = ase instanceof AmazonS3Exception ? (AmazonS3Exception) ase : null;
        int status = ase.getStatusCode();
        switch(status) {
            case 301:
                if (s3Exception != null) {
                    if (s3Exception.getAdditionalDetails() != null && s3Exception.getAdditionalDetails().containsKey(ENDPOINT_KEY)) {
                        message = String.format("Received permanent redirect response to " + "endpoint %s.  This likely indicates that the COS endpoint " + "configured in %s does not match the region containing " + "the bucket.", s3Exception.getAdditionalDetails().get(ENDPOINT_KEY), ENDPOINT_URL);
                    }
                    ioe = new COSIOException(message, s3Exception);
                } else {
                    ioe = new COSServiceIOException(message, ase);
                }
                break;
            // permissions
            case 401:
            case 403:
                ioe = new AccessDeniedException(path, null, message);
                ioe.initCause(ase);
                break;
            // the object isn't there
            case 404:
            case 410:
                ioe = new FileNotFoundException(message);
                ioe.initCause(ase);
                break;
            // out of range: the object may have been replaced by a shorter one while it is being read.
            case 416:
                ioe = new EOFException(message);
                break;
            default:
                // no specific exit code. Choose an IOE subclass based on the class
                // of the caught exception
                ioe = s3Exception != null ? new COSIOException(message, s3Exception) : new COSServiceIOException(message, ase);
                break;
        }
        return ioe;
    }
}
Also used : COSClientIOException(com.ibm.stocator.fs.cos.exception.COSClientIOException) InterruptedIOException(java.io.InterruptedIOException) AccessDeniedException(java.nio.file.AccessDeniedException) COSIOException(com.ibm.stocator.fs.cos.exception.COSIOException) COSServiceIOException(com.ibm.stocator.fs.cos.exception.COSServiceIOException) AmazonServiceException(com.amazonaws.AmazonServiceException) FileNotFoundException(java.io.FileNotFoundException) EOFException(java.io.EOFException) IOException(java.io.IOException) AmazonS3Exception(com.amazonaws.services.s3.model.AmazonS3Exception)
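The core of translateException is the mapping from HTTP status codes to IOException subclasses. The sketch below isolates that mapping in a self-contained form; the COS-specific wrapper types (COSIOException, COSServiceIOException) are replaced by plain IOException here so the snippet compiles without stocator on the classpath, and `forStatus` is an illustrative name, not part of COSUtils.

```java
import java.io.EOFException;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.AccessDeniedException;

public class StatusMappingSketch {
    // Standalone version of the status-code switch in translateException.
    static IOException forStatus(int status, String message) {
        switch (status) {
            case 401:            // permissions
            case 403:
                return new AccessDeniedException(null, null, message);
            case 404:            // the object isn't there
            case 410:
                return new FileNotFoundException(message);
            case 416:            // requested range past the end of the object
                return new EOFException(message);
            default:             // no specific mapping for this status
                return new IOException(message);
        }
    }

    public static void main(String[] args) {
        System.out.println(forStatus(404, "no such key").getClass().getSimpleName());   // FileNotFoundException
        System.out.println(forStatus(416, "range past EOF").getClass().getSimpleName()); // EOFException
    }
}
```

Returning (rather than throwing) the exception lets the caller attach a cause and decide where to throw, exactly as translateException does.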

Aggregations

AmazonS3 (com.amazonaws.services.s3.AmazonS3): 18 usages
AmazonS3Client (com.amazonaws.services.s3.AmazonS3Client): 17 usages
IOException (java.io.IOException): 12 usages
AmazonServiceException (com.amazonaws.AmazonServiceException): 11 usages
AmazonS3Exception (com.amazonaws.services.s3.model.AmazonS3Exception): 11 usages
Test (org.junit.Test): 10 usages
AmazonClientException (com.amazonaws.AmazonClientException): 9 usages
BasicAWSCredentials (com.amazonaws.auth.BasicAWSCredentials): 9 usages
Regions (com.amazonaws.regions.Regions): 9 usages
HashMap (java.util.HashMap): 9 usages
Date (java.util.Date): 8 usages
Map (java.util.Map): 8 usages
ClientConfiguration (com.amazonaws.ClientConfiguration): 7 usages
AmazonS3ClientBuilder (com.amazonaws.services.s3.AmazonS3ClientBuilder): 7 usages
S3Object (com.amazonaws.services.s3.model.S3Object): 7 usages
AWSKMS (com.amazonaws.services.kms.AWSKMS): 6 usages
TransferManager (com.amazonaws.services.s3.transfer.TransferManager): 6 usages
ByteArrayInputStream (java.io.ByteArrayInputStream): 6 usages
FileNotFoundException (java.io.FileNotFoundException): 6 usages
InputStream (java.io.InputStream): 6 usages