Example 6 with AmazonClientException

Use of com.amazonaws.AmazonClientException in the Apache Hadoop project.

Class S3AFileSystem, method uploadPart.

/**
   * Upload part of a multi-partition file.
   * Increments the write and put counters.
   * <i>Important: this call does not close any input stream in the request.</i>
   * @param request request
   * @return the result of the operation.
   * @throws AmazonClientException on problems
   */
public UploadPartResult uploadPart(UploadPartRequest request) throws AmazonClientException {
    long len = request.getPartSize();
    incrementPutStartStatistics(len);
    try {
        UploadPartResult uploadPartResult = s3.uploadPart(request);
        incrementPutCompletedStatistics(true, len);
        return uploadPartResult;
    } catch (AmazonClientException e) {
        incrementPutCompletedStatistics(false, len);
        throw e;
    }
}
Also used: UploadPartResult (com.amazonaws.services.s3.model.UploadPartResult), AmazonClientException (com.amazonaws.AmazonClientException)
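For illustration, here is a minimal sketch of how a caller might drive uploadPart as one step of a multipart upload. The bucket name, key, upload id and the fs handle are hypothetical placeholders, and only the standard AWS SDK v1 UploadPartRequest setters are assumed; this is not the code Hadoop itself uses to build part requests.

static PartETag uploadOnePart(S3AFileSystem fs, String uploadId) {
    // One 8 MB part of illustrative data; real callers stream from their own block buffer.
    byte[] block = new byte[8 * 1024 * 1024];
    UploadPartRequest request = new UploadPartRequest()
        // hypothetical bucket and key
        .withBucketName("example-bucket")
        .withKey("example/key")
        // uploadId would come from an earlier InitiateMultipartUploadRequest (not shown)
        .withUploadId(uploadId)
        .withPartNumber(1)
        .withInputStream(new ByteArrayInputStream(block))
        .withPartSize(block.length);
    // uploadPart increments the put counters and rethrows any AmazonClientException
    UploadPartResult result = fs.uploadPart(request);
    // the part ETag is collected for the eventual CompleteMultipartUploadRequest
    return result.getPartETag();
}
Uses: UploadPartRequest, UploadPartResult, PartETag (com.amazonaws.services.s3.model), ByteArrayInputStream (java.io)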

Example 7 with AmazonClientException

Use of com.amazonaws.AmazonClientException in the Apache Hadoop project.

Class S3AFileSystem, method listFiles.

/**
   * {@inheritDoc}.
   *
   * This implementation is optimized for S3, which can do a bulk listing
   * of all entries under a path in a single operation. Thus there is
   * no need to recursively walk the directory tree.
   *
   * Instead a {@link ListObjectsRequest} is created requesting a (windowed)
   * listing of all entries under the given path. This is used to construct
   * an {@code ObjectListingIterator} instance, iteratively returning the
   * sequence of lists of elements under the path. This is then iterated
   * over in a {@code FileStatusListingIterator}, which generates
   * {@link S3AFileStatus} instances, one per listing entry.
   * These are then translated into {@link LocatedFileStatus} instances.
   *
   * This is essentially a nested and wrapped set of iterators, with some
   * generator classes; an architecture which may become less convoluted
   * using lambda-expressions.
   * @param f a path
   * @param recursive if the subdirectories need to be traversed recursively
   *
   * @return an iterator that traverses statuses of the files/directories
   *         in the given path
   * @throws FileNotFoundException if {@code path} does not exist
   * @throws IOException if any I/O error occurred
   */
@Override
public RemoteIterator<LocatedFileStatus> listFiles(Path f, boolean recursive) throws FileNotFoundException, IOException {
    incrementStatistic(INVOCATION_LIST_FILES);
    Path path = qualify(f);
    LOG.debug("listFiles({}, {})", path, recursive);
    try {
        // lookup dir triggers existence check
        final FileStatus fileStatus = getFileStatus(path);
        if (fileStatus.isFile()) {
            // simple case: File
            LOG.debug("Path is a file");
            return new Listing.SingleStatusRemoteIterator(toLocatedFileStatus(fileStatus));
        } else {
            // directory: do a bulk operation
            String key = maybeAddTrailingSlash(pathToKey(path));
            String delimiter = recursive ? null : "/";
            LOG.debug("Requesting all entries under {} with delimiter '{}'", key, delimiter);
            return listing.createLocatedFileStatusIterator(listing.createFileStatusListingIterator(path, createListObjectsRequest(key, delimiter), ACCEPT_ALL, new Listing.AcceptFilesOnly(path)));
        }
    } catch (AmazonClientException e) {
        throw translateException("listFiles", path, e);
    }
}
Also used: Path (org.apache.hadoop.fs.Path), FileStatus (org.apache.hadoop.fs.FileStatus), LocatedFileStatus (org.apache.hadoop.fs.LocatedFileStatus), AmazonClientException (com.amazonaws.AmazonClientException)
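As a hedged caller-side sketch (not taken from the Hadoop sources), the RemoteIterator returned above is consumed with the usual hasNext()/next() loop; the helper name and the path argument are placeholders.

static long totalBytesUnder(S3AFileSystem fs, Path dir) throws IOException {
    long totalBytes = 0;
    // recursive = true: a flat, delimiter-less listing of every object under dir
    RemoteIterator<LocatedFileStatus> files = fs.listFiles(dir, true);
    while (files.hasNext()) {
        totalBytes += files.next().getLen();
    }
    return totalBytes;
}
Uses: RemoteIterator, LocatedFileStatus, Path (org.apache.hadoop.fs), IOException (java.io)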

Example 8 with AmazonClientException

Use of com.amazonaws.AmazonClientException in the Apache Hadoop project.

Class S3AFileSystem, method copyFile.

/**
   * Copy a single object in the bucket via a COPY operation.
   * @param srcKey source object path
   * @param dstKey destination object path
   * @param size object size
   * @throws AmazonClientException on failures inside the AWS SDK
   * @throws InterruptedIOException if the operation was interrupted
   * @throws IOException Other IO problems
   */
private void copyFile(String srcKey, String dstKey, long size) throws IOException, InterruptedIOException, AmazonClientException {
    LOG.debug("copyFile {} -> {} ", srcKey, dstKey);
    try {
        ObjectMetadata srcom = getObjectMetadata(srcKey);
        ObjectMetadata dstom = cloneObjectMetadata(srcom);
        setOptionalObjectMetadata(dstom);
        CopyObjectRequest copyObjectRequest = new CopyObjectRequest(bucket, srcKey, bucket, dstKey);
        setOptionalCopyObjectRequestParameters(copyObjectRequest);
        copyObjectRequest.setCannedAccessControlList(cannedACL);
        copyObjectRequest.setNewObjectMetadata(dstom);
        ProgressListener progressListener = new ProgressListener() {

            public void progressChanged(ProgressEvent progressEvent) {
                switch(progressEvent.getEventType()) {
                    case TRANSFER_PART_COMPLETED_EVENT:
                        incrementWriteOperations();
                        break;
                    default:
                        break;
                }
            }
        };
        Copy copy = transfers.copy(copyObjectRequest);
        copy.addProgressListener(progressListener);
        try {
            copy.waitForCopyResult();
            incrementWriteOperations();
            instrumentation.filesCopied(1, size);
        } catch (InterruptedException e) {
            throw new InterruptedIOException("Interrupted copying " + srcKey + " to " + dstKey + ", cancelling");
        }
    } catch (AmazonClientException e) {
        throw translateException("copyFile(" + srcKey + ", " + dstKey + ")", srcKey, e);
    }
}
Also used: InterruptedIOException (java.io.InterruptedIOException), CopyObjectRequest (com.amazonaws.services.s3.model.CopyObjectRequest), ProgressListener (com.amazonaws.event.ProgressListener), Copy (com.amazonaws.services.s3.transfer.Copy), AmazonClientException (com.amazonaws.AmazonClientException), ProgressEvent (com.amazonaws.event.ProgressEvent), ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata)
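The catch block above relies on translateException to turn SDK failures into Java IO exceptions. The following is only a rough, simplified sketch of that idea; the real S3AUtils.translateException in Hadoop handles many more cases (throttling, interrupted operations, EOF conditions, etc.), and the method name here is illustrative.

static IOException toIOException(String operation, String path, AmazonClientException e) {
    if (e instanceof AmazonServiceException) {
        // service-side failure: map the HTTP status to a more specific IOException
        int status = ((AmazonServiceException) e).getStatusCode();
        if (status == 404) {
            return (IOException) new FileNotFoundException(operation + " on " + path).initCause(e);
        }
        if (status == 403) {
            return (IOException) new AccessDeniedException(path, null, operation).initCause(e);
        }
    }
    // client-side failure (network, serialization, etc.): wrap generically
    return new IOException(operation + " on " + path + ": " + e, e);
}
Uses: AmazonServiceException (com.amazonaws), FileNotFoundException, IOException (java.io), AccessDeniedException (java.nio.file)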

Example 9 with AmazonClientException

Use of com.amazonaws.AmazonClientException in the Apache Hadoop project.

Class S3AFileSystem, method initialize.

/** Called after a new FileSystem instance is constructed.
   * @param name a uri whose authority section names the host, port, etc.
   *   for this FileSystem
   * @param originalConf the configuration to use for the FS. The
   * bucket-specific options are patched over the base ones before any use is
   * made of the config.
   */
public void initialize(URI name, Configuration originalConf) throws IOException {
    uri = S3xLoginHelper.buildFSURI(name);
    // get the host; this is guaranteed to be non-null, non-empty
    bucket = name.getHost();
    // clone the configuration into one with propagated bucket options
    Configuration conf = propagateBucketOptions(originalConf, bucket);
    patchSecurityCredentialProviders(conf);
    super.initialize(name, conf);
    setConf(conf);
    try {
        instrumentation = new S3AInstrumentation(name);
        // Username is the current user at the time the FS was instantiated.
        username = UserGroupInformation.getCurrentUser().getShortUserName();
        workingDir = new Path("/user", username).makeQualified(this.uri, this.getWorkingDirectory());
        Class<? extends S3ClientFactory> s3ClientFactoryClass = conf.getClass(S3_CLIENT_FACTORY_IMPL, DEFAULT_S3_CLIENT_FACTORY_IMPL, S3ClientFactory.class);
        s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, conf).createS3Client(name, uri);
        maxKeys = intOption(conf, MAX_PAGING_KEYS, DEFAULT_MAX_PAGING_KEYS, 1);
        listing = new Listing(this);
        partSize = getMultipartSizeProperty(conf, MULTIPART_SIZE, DEFAULT_MULTIPART_SIZE);
        multiPartThreshold = getMultipartSizeProperty(conf, MIN_MULTIPART_THRESHOLD, DEFAULT_MIN_MULTIPART_THRESHOLD);
        //check but do not store the block size
        longBytesOption(conf, FS_S3A_BLOCK_SIZE, DEFAULT_BLOCKSIZE, 1);
        enableMultiObjectsDelete = conf.getBoolean(ENABLE_MULTI_DELETE, true);
        readAhead = longBytesOption(conf, READAHEAD_RANGE, DEFAULT_READAHEAD_RANGE, 0);
        storageStatistics = (S3AStorageStatistics) GlobalStorageStatistics.INSTANCE.put(S3AStorageStatistics.NAME, new GlobalStorageStatistics.StorageStatisticsProvider() {

            @Override
            public StorageStatistics provide() {
                return new S3AStorageStatistics();
            }
        });
        int maxThreads = conf.getInt(MAX_THREADS, DEFAULT_MAX_THREADS);
        if (maxThreads < 2) {
            LOG.warn(MAX_THREADS + " must be at least 2: forcing to 2.");
            maxThreads = 2;
        }
        int totalTasks = intOption(conf, MAX_TOTAL_TASKS, DEFAULT_MAX_TOTAL_TASKS, 1);
        long keepAliveTime = longOption(conf, KEEPALIVE_TIME, DEFAULT_KEEPALIVE_TIME, 0);
        boundedThreadPool = BlockingThreadPoolExecutorService.newInstance(maxThreads, maxThreads + totalTasks, keepAliveTime, TimeUnit.SECONDS, "s3a-transfer-shared");
        unboundedThreadPool = new ThreadPoolExecutor(maxThreads, Integer.MAX_VALUE, keepAliveTime, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(), BlockingThreadPoolExecutorService.newDaemonThreadFactory("s3a-transfer-unbounded"));
        initTransferManager();
        initCannedAcls(conf);
        verifyBucketExists();
        initMultipartUploads(conf);
        serverSideEncryptionAlgorithm = S3AEncryptionMethods.getMethod(conf.getTrimmed(SERVER_SIDE_ENCRYPTION_ALGORITHM));
        if (S3AEncryptionMethods.SSE_C.equals(serverSideEncryptionAlgorithm) && StringUtils.isBlank(getServerSideEncryptionKey(getConf()))) {
            throw new IOException(Constants.SSE_C_NO_KEY_ERROR);
        }
        if (S3AEncryptionMethods.SSE_S3.equals(serverSideEncryptionAlgorithm) && StringUtils.isNotBlank(getServerSideEncryptionKey(getConf()))) {
            throw new IOException(Constants.SSE_S3_WITH_KEY_ERROR);
        }
        LOG.debug("Using encryption {}", serverSideEncryptionAlgorithm);
        inputPolicy = S3AInputPolicy.getPolicy(conf.getTrimmed(INPUT_FADVISE, INPUT_FADV_NORMAL));
        blockUploadEnabled = conf.getBoolean(FAST_UPLOAD, DEFAULT_FAST_UPLOAD);
        if (blockUploadEnabled) {
            blockOutputBuffer = conf.getTrimmed(FAST_UPLOAD_BUFFER, DEFAULT_FAST_UPLOAD_BUFFER);
            partSize = ensureOutputParameterInRange(MULTIPART_SIZE, partSize);
            blockFactory = S3ADataBlocks.createFactory(this, blockOutputBuffer);
            blockOutputActiveBlocks = intOption(conf, FAST_UPLOAD_ACTIVE_BLOCKS, DEFAULT_FAST_UPLOAD_ACTIVE_BLOCKS, 1);
            LOG.debug("Using S3ABlockOutputStream with buffer = {}; block={};" + " queue limit={}", blockOutputBuffer, partSize, blockOutputActiveBlocks);
        } else {
            LOG.debug("Using S3AOutputStream");
        }
    } catch (AmazonClientException e) {
        throw translateException("initializing ", new Path(name), e);
    }
}
Also used: Path (org.apache.hadoop.fs.Path), Configuration (org.apache.hadoop.conf.Configuration), TransferManagerConfiguration (com.amazonaws.services.s3.transfer.TransferManagerConfiguration), GlobalStorageStatistics (org.apache.hadoop.fs.GlobalStorageStatistics), StorageStatistics (org.apache.hadoop.fs.StorageStatistics), AmazonClientException (com.amazonaws.AmazonClientException), PathIOException (org.apache.hadoop.fs.PathIOException), InterruptedIOException (java.io.InterruptedIOException), IOException (java.io.IOException), LinkedBlockingQueue (java.util.concurrent.LinkedBlockingQueue), ObjectListing (com.amazonaws.services.s3.model.ObjectListing), ThreadPoolExecutor (java.util.concurrent.ThreadPoolExecutor)
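To show how the options read in initialize() are typically supplied, here is a hedged client-side sketch. The fs.s3a.* key strings are believed to correspond to the constants used above, but verify them against Constants for your Hadoop release; the bucket URI and all values are placeholders.

static FileSystem openExampleFs() throws IOException {
    Configuration conf = new Configuration();
    // multipart and threading options consumed by initialize()
    conf.setLong("fs.s3a.multipart.size", 64L * 1024 * 1024);
    conf.setInt("fs.s3a.threads.max", 20);          // forced up to at least 2 if set lower
    conf.setBoolean("fs.s3a.fast.upload", true);    // enables the block output stream path
    // per-bucket override, patched over the base options by propagateBucketOptions()
    conf.setLong("fs.s3a.bucket.example-bucket.readahead.range", 1024 * 1024);
    return FileSystem.get(URI.create("s3a://example-bucket/"), conf);
}
Uses: Configuration (org.apache.hadoop.conf), FileSystem (org.apache.hadoop.fs), URI (java.net), IOException (java.io)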

Example 10 with AmazonClientException

Use of com.amazonaws.AmazonClientException in the Apache Hadoop project.

Class S3AFileSystem, method putObjectDirect.

/**
   * PUT an object directly (i.e. not via the transfer manager).
   * Byte length is calculated from the file length, or, if there is no
   * file, from the content length of the header.
   * <i>Important: this call will close any input stream in the request.</i>
   * @param putObjectRequest the request
   * @return the upload initiated
   * @throws AmazonClientException on problems
   */
public PutObjectResult putObjectDirect(PutObjectRequest putObjectRequest) throws AmazonClientException {
    long len;
    if (putObjectRequest.getFile() != null) {
        len = putObjectRequest.getFile().length();
    } else {
        len = putObjectRequest.getMetadata().getContentLength();
    }
    incrementPutStartStatistics(len);
    try {
        PutObjectResult result = s3.putObject(putObjectRequest);
        incrementPutCompletedStatistics(true, len);
        return result;
    } catch (AmazonClientException e) {
        incrementPutCompletedStatistics(false, len);
        throw e;
    }
}
Also used: PutObjectResult (com.amazonaws.services.s3.model.PutObjectResult), AmazonClientException (com.amazonaws.AmazonClientException)
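A hedged caller-side sketch of the no-file branch described in the javadoc: because the request carries no File, the content length must be set on the metadata, which is exactly what putObjectDirect reads. The bucket and key names below are placeholders.

static PutObjectResult putSmallObject(S3AFileSystem fs, byte[] data) {
    ObjectMetadata metadata = new ObjectMetadata();
    // no file in the request, so the byte length comes from the metadata header
    metadata.setContentLength(data.length);
    PutObjectRequest request = new PutObjectRequest(
        "example-bucket", "example/key", new ByteArrayInputStream(data), metadata);
    // per the javadoc above, putObjectDirect closes the request's input stream
    return fs.putObjectDirect(request);
}
Uses: PutObjectRequest, PutObjectResult, ObjectMetadata (com.amazonaws.services.s3.model), ByteArrayInputStream (java.io)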

Aggregations

AmazonClientException (com.amazonaws.AmazonClientException): 202
IOException (java.io.IOException): 70
AmazonServiceException (com.amazonaws.AmazonServiceException): 32
ArrayList (java.util.ArrayList): 32
ObjectMetadata (com.amazonaws.services.s3.model.ObjectMetadata): 23
ObjectListing (com.amazonaws.services.s3.model.ObjectListing): 19
S3ObjectSummary (com.amazonaws.services.s3.model.S3ObjectSummary): 17
HashMap (java.util.HashMap): 16
PutObjectRequest (com.amazonaws.services.s3.model.PutObjectRequest): 14
Test (org.junit.Test): 14
SienaException (siena.SienaException): 12
AWSCredentials (com.amazonaws.auth.AWSCredentials): 11
AmazonS3Client (com.amazonaws.services.s3.AmazonS3Client): 11
GetObjectRequest (com.amazonaws.services.s3.model.GetObjectRequest): 11
ListObjectsRequest (com.amazonaws.services.s3.model.ListObjectsRequest): 11
AmazonDynamoDB (com.amazonaws.services.dynamodbv2.AmazonDynamoDB): 10
AmazonS3Exception (com.amazonaws.services.s3.model.AmazonS3Exception): 10
InterruptedIOException (java.io.InterruptedIOException): 10
DeleteObjectsRequest (com.amazonaws.services.s3.model.DeleteObjectsRequest): 9
File (java.io.File): 9