
Example 6 with CloudBlobWrapper

use of org.apache.hadoop.fs.azure.StorageInterface.CloudBlobWrapper in project hadoop by apache.

the class AzureNativeFileSystemStore method updateFolderLastModifiedTime.

@Override
public void updateFolderLastModifiedTime(String key, Date lastModified, SelfRenewingLease folderLease) throws AzureException {
    try {
        checkContainer(ContainerAccessType.ReadThenWrite);
        CloudBlobWrapper blob = getBlobReference(key);
        // setLastModified is not available in version 2.0.0 of the SDK;
        // blob.uploadProperties automatically updates the last-modified
        // timestamp to the current time.
        blob.uploadProperties(getInstrumentedContext(), folderLease);
    } catch (Exception e) {
        // Azure storage exception.
        throw new AzureException(e);
    }
}
Also used: CloudBlobWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudBlobWrapper), URISyntaxException (java.net.URISyntaxException), InvalidKeyException (java.security.InvalidKeyException), UnsupportedEncodingException (java.io.UnsupportedEncodingException), StorageException (com.microsoft.azure.storage.StorageException), IOException (java.io.IOException)

Example 7 with CloudBlobWrapper

use of org.apache.hadoop.fs.azure.StorageInterface.CloudBlobWrapper in project hadoop by apache.

the class AzureNativeFileSystemStore method acquireLease.

/**
   * Get a lease on the blob identified by key. This lease will be renewed
   * indefinitely by a background thread.
   */
@Override
public SelfRenewingLease acquireLease(String key) throws AzureException {
    LOG.debug("acquiring lease on {}", key);
    try {
        checkContainer(ContainerAccessType.ReadThenWrite);
        CloudBlobWrapper blob = getBlobReference(key);
        return blob.acquireLease();
    } catch (Exception e) {
        // Azure storage exception.
        throw new AzureException(e);
    }
}
Also used: CloudBlobWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudBlobWrapper), URISyntaxException (java.net.URISyntaxException), InvalidKeyException (java.security.InvalidKeyException), UnsupportedEncodingException (java.io.UnsupportedEncodingException), StorageException (com.microsoft.azure.storage.StorageException), IOException (java.io.IOException)
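The SelfRenewingLease returned above keeps the lease alive from a background thread, per its javadoc. A minimal, self-contained sketch of that renewal pattern (the class and method names below are illustrative stand-ins, not the actual SelfRenewingLease API): a daemon scheduler re-issues a renew call at a fixed interval until the lease is freed.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of a self-renewing lease: a background daemon task
// periodically "renews" the lease until free() is called.
class RenewingLease {
    private final ScheduledExecutorService renewer =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "lease-renewer");
            t.setDaemon(true);
            return t;
        });
    private final AtomicInteger renewCount = new AtomicInteger();

    RenewingLease(long renewIntervalMillis) {
        // Schedule renewals at a fixed interval, well inside the lease duration.
        renewer.scheduleAtFixedRate(this::renew, renewIntervalMillis,
            renewIntervalMillis, TimeUnit.MILLISECONDS);
    }

    private void renew() {
        // The real store would call the blob service here to extend the lease.
        renewCount.incrementAndGet();
    }

    int renewals() {
        return renewCount.get();
    }

    void free() {
        // Stop renewing; a real implementation would also release the lease.
        renewer.shutdownNow();
    }
}
```

In the real store, the renew step would go over the wire to the blob service, and freeing the lease would release it server-side as well.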

Example 8 with CloudBlobWrapper

use of org.apache.hadoop.fs.azure.StorageInterface.CloudBlobWrapper in project hadoop by apache.

the class AzureNativeFileSystemStore method storeEmptyFolder.

@Override
public void storeEmptyFolder(String key, PermissionStatus permissionStatus) throws AzureException {
    if (null == storageInteractionLayer) {
        final String errMsg = String.format("Storage session expected for URI '%s' but does not exist.", sessionUri);
        throw new AssertionError(errMsg);
    }
    // Check whether this file system instance has been authenticated; if not,
    // all access is anonymous.
    if (!isAuthenticatedAccess()) {
        // Uploads are not allowed to anonymous (public) accounts.
        throw new AzureException("Uploads to public accounts using anonymous access are prohibited.");
    }
    try {
        checkContainer(ContainerAccessType.PureWrite);
        CloudBlobWrapper blob = getBlobReference(key);
        storePermissionStatus(blob, permissionStatus);
        storeFolderAttribute(blob);
        openOutputStream(blob).close();
    } catch (StorageException e) {
        // storage exception.
        throw new AzureException(e);
    } catch (URISyntaxException e) {
        throw new AzureException(e);
    } catch (IOException e) {
        Throwable t = e.getCause();
        if (t != null && t instanceof StorageException) {
            StorageException se = (StorageException) t;
            // If we got this exception, the blob should have already been created
            if (!se.getErrorCode().equals("LeaseIdMissing")) {
                throw new AzureException(e);
            }
        } else {
            throw new AzureException(e);
        }
    }
}
Also used: CloudBlobWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudBlobWrapper), URISyntaxException (java.net.URISyntaxException), IOException (java.io.IOException), StorageException (com.microsoft.azure.storage.StorageException)
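The IOException handler above tolerates exactly one failure mode: a wrapped storage error whose code is "LeaseIdMissing", meaning the empty-folder blob already exists and is held under a lease. A minimal sketch of that cause-unwrapping check, using a hypothetical FakeStorageException as a stand-in for the real com.microsoft.azure.storage.StorageException:

```java
import java.io.IOException;

// Sketch of the cause-unwrapping pattern: an IOException is tolerated only
// when its cause is a storage error with the "LeaseIdMissing" error code.
class LeaseErrorFilter {
    // Illustrative stand-in for com.microsoft.azure.storage.StorageException.
    static class FakeStorageException extends Exception {
        private final String errorCode;
        FakeStorageException(String errorCode) {
            this.errorCode = errorCode;
        }
        String getErrorCode() {
            return errorCode;
        }
    }

    /** Returns true when the IOException should be swallowed, not rethrown. */
    static boolean isBenignLeaseError(IOException e) {
        Throwable t = e.getCause();
        return t instanceof FakeStorageException
            && "LeaseIdMissing".equals(((FakeStorageException) t).getErrorCode());
    }
}
```

Note that `instanceof` already returns false for null, so the explicit `t != null` guard in the original is redundant but harmless.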

Example 9 with CloudBlobWrapper

use of org.apache.hadoop.fs.azure.StorageInterface.CloudBlobWrapper in project hadoop by apache.

the class AzureNativeFileSystemStore method retrieveMetadata.

@Override
public FileMetadata retrieveMetadata(String key) throws IOException {
    // A storage session with the Azure storage server must already exist.
    if (null == storageInteractionLayer) {
        final String errMsg = String.format("Storage session expected for URI '%s' but does not exist.", sessionUri);
        throw new AssertionError(errMsg);
    }
    LOG.debug("Retrieving metadata for {}", key);
    try {
        if (checkContainer(ContainerAccessType.PureRead) == ContainerState.DoesntExist) {
            // The container does not exist, so return null now.
            return null;
        }
        // Handle the degenerate case where the key refers to the root container.
        if (key.equals("/")) {
            // Set the modification time for root to zero.
            return new FileMetadata(key, 0, defaultPermissionNoBlobMetadata(), BlobMaterialization.Implicit);
        }
        CloudBlobWrapper blob = getBlobReference(key);
        // Download attributes and return file metadata only if the blob exists.
        if (null != blob && blob.exists(getInstrumentedContext())) {
            LOG.debug("Found {} as an explicit blob. Checking if it's a file or folder.", key);
            // The blob exists, so capture the metadata from the blob
            // properties.
            blob.downloadAttributes(getInstrumentedContext());
            BlobProperties properties = blob.getProperties();
            if (retrieveFolderAttribute(blob)) {
                LOG.debug("{} is a folder blob.", key);
                return new FileMetadata(key, properties.getLastModified().getTime(), getPermissionStatus(blob), BlobMaterialization.Explicit);
            } else {
                LOG.debug("{} is a normal blob.", key);
                // Always return denormalized key with metadata.
                return new FileMetadata(key, getDataLength(blob, properties), properties.getLastModified().getTime(), getPermissionStatus(blob));
            }
        }
        // There is no file with that key name, but maybe it is a folder.
        // Query the underlying folder/container to list the blobs stored
        // there under that key.
        //
        Iterable<ListBlobItem> objects = listRootBlobs(key, true, EnumSet.of(BlobListingDetails.METADATA), null, getInstrumentedContext());
        // Check if the directory/container has the blob items.
        for (ListBlobItem blobItem : objects) {
            if (blobItem instanceof CloudBlockBlobWrapper || blobItem instanceof CloudPageBlobWrapper) {
                LOG.debug("Found blob as a directory; using this file under it to infer its properties {}", blobItem.getUri());
                blob = (CloudBlobWrapper) blobItem;
                // The key specifies a directory. Create a FileMetadata object which
                // specifies as such.
                BlobProperties properties = blob.getProperties();
                return new FileMetadata(key, properties.getLastModified().getTime(), getPermissionStatus(blob), BlobMaterialization.Implicit);
            }
        }
        // Return to caller with a null metadata object.
        return null;
    } catch (Exception e) {
        // Re-throw the exception as an Azure storage exception.
        throw new AzureException(e);
    }
}
Also used: CloudBlobWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudBlobWrapper), ListBlobItem (com.microsoft.azure.storage.blob.ListBlobItem), CloudPageBlobWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudPageBlobWrapper), CloudBlockBlobWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudBlockBlobWrapper), BlobProperties (com.microsoft.azure.storage.blob.BlobProperties), URISyntaxException (java.net.URISyntaxException), InvalidKeyException (java.security.InvalidKeyException), UnsupportedEncodingException (java.io.UnsupportedEncodingException), StorageException (com.microsoft.azure.storage.StorageException), IOException (java.io.IOException)
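retrieveMetadata resolves a key in two steps: first check for an explicit blob at the key, and only if that fails, list under the key to see whether some child blob implies an implicit directory. A simplified, self-contained model of that resolution order over a sorted key set (the store and the Kind result type here are illustrative, not the real FileMetadata/StorageInterface API):

```java
import java.util.TreeMap;

// Simplified model of the lookup order in retrieveMetadata: an explicit blob
// at the key wins; otherwise any blob under "key/" makes the key an implicit
// directory; otherwise the key is missing.
class MetadataLookup {
    enum Kind { EXPLICIT_BLOB, IMPLICIT_DIR, MISSING }

    private final TreeMap<String, Long> blobs = new TreeMap<>(); // key -> length

    void put(String key, long length) {
        blobs.put(key, length);
    }

    Kind resolve(String key) {
        if (blobs.containsKey(key)) {
            return Kind.EXPLICIT_BLOB;          // the blob itself exists
        }
        String prefix = key.endsWith("/") ? key : key + "/";
        String next = blobs.ceilingKey(prefix); // first key at or after prefix
        if (next != null && next.startsWith(prefix)) {
            return Kind.IMPLICIT_DIR;           // inferred from a child blob
        }
        return Kind.MISSING;
    }
}
```

The sorted-map `ceilingKey` trick mirrors what the real code gets from a prefix listing: one child blob is enough to prove the directory's (implicit) existence.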

Example 10 with CloudBlobWrapper

use of org.apache.hadoop.fs.azure.StorageInterface.CloudBlobWrapper in project hadoop by apache.

the class AzureNativeFileSystemStore method buildUpList.

/**
   * Build up a metadata list of blobs in an Azure blob directory. This method
   * uses an in-order traversal of the blob directory structure to maintain
   * the sorted order of the blob names.
   *
   * @param aCloudBlobDirectory Azure blob directory
   * @param aFileMetadataList a list of file metadata objects for each
   *                          non-directory blob.
   * @param maxListingCount maximum length of the built-up list.
   * @param maxListingDepth maximum directory depth to traverse; a negative
   *                        value means the depth is unbounded.
   */
private void buildUpList(CloudBlobDirectoryWrapper aCloudBlobDirectory, ArrayList<FileMetadata> aFileMetadataList, final int maxListingCount, final int maxListingDepth) throws Exception {
    // Push the blob directory onto the stack.
    //
    AzureLinkedStack<Iterator<ListBlobItem>> dirIteratorStack = new AzureLinkedStack<Iterator<ListBlobItem>>();
    Iterable<ListBlobItem> blobItems = aCloudBlobDirectory.listBlobs(null, false, EnumSet.of(BlobListingDetails.METADATA), null, getInstrumentedContext());
    Iterator<ListBlobItem> blobItemIterator = blobItems.iterator();
    if (0 == maxListingDepth || 0 == maxListingCount) {
        // Nothing to list when the depth or count limit is zero; return
        // immediately.
        return;
    }
    // The directory listing depth is unbounded if the maximum listing depth
    // is negative.
    final boolean isUnboundedDepth = (maxListingDepth < 0);
    // Reset the current directory listing depth.
    int listingDepth = 1;
    // Loop while there are blob items left to process and the size of the
    // metadata list is less than the max listing count.
    while (null != blobItemIterator && (maxListingCount <= 0 || aFileMetadataList.size() < maxListingCount)) {
        while (blobItemIterator.hasNext()) {
            //
            if (0 < maxListingCount && aFileMetadataList.size() >= maxListingCount) {
                break;
            }
            ListBlobItem blobItem = blobItemIterator.next();
            //
            if (blobItem instanceof CloudBlockBlobWrapper || blobItem instanceof CloudPageBlobWrapper) {
                String blobKey = null;
                CloudBlobWrapper blob = (CloudBlobWrapper) blobItem;
                BlobProperties properties = blob.getProperties();
                // Determine format of the blob name depending on whether an absolute
                // path is being used or not.
                blobKey = normalizeKey(blob);
                FileMetadata metadata;
                if (retrieveFolderAttribute(blob)) {
                    metadata = new FileMetadata(blobKey, properties.getLastModified().getTime(), getPermissionStatus(blob), BlobMaterialization.Explicit);
                } else {
                    metadata = new FileMetadata(blobKey, getDataLength(blob, properties), properties.getLastModified().getTime(), getPermissionStatus(blob));
                }
                // Add the directory metadata to the list only if it's not already
                // there.
                FileMetadata existing = getFileMetadataInList(aFileMetadataList, blobKey);
                if (existing != null) {
                    aFileMetadataList.remove(existing);
                }
                aFileMetadataList.add(metadata);
            } else if (blobItem instanceof CloudBlobDirectoryWrapper) {
                CloudBlobDirectoryWrapper directory = (CloudBlobDirectoryWrapper) blobItem;
                // Check whether the maximum listing depth allows descending
                // into this directory.
                if (isUnboundedDepth || maxListingDepth > listingDepth) {
                    // Push the current directory on the stack and increment the listing
                    // depth.
                    dirIteratorStack.push(blobItemIterator);
                    ++listingDepth;
                    // The current blob item represents the new directory. Get
                    // an iterator for this directory and continue by iterating through
                    // this directory.
                    blobItems = directory.listBlobs(null, false, EnumSet.noneOf(BlobListingDetails.class), null, getInstrumentedContext());
                    blobItemIterator = blobItems.iterator();
                } else {
                    // Determine format of directory name depending on whether an
                    // absolute path is being used or not.
                    String dirKey = normalizeKey(directory);
                    if (getFileMetadataInList(aFileMetadataList, dirKey) == null) {
                        // Reached the targeted listing depth. Return metadata for the
                        // directory using default permissions.
                        //
                        // Note: Something smarter should be done about permissions. Maybe
                        // inherit the permissions of the first non-directory blob.
                        // Also, getting a proper value for last-modified is tricky.
                        //
                        FileMetadata directoryMetadata = new FileMetadata(dirKey, 0, defaultPermissionNoBlobMetadata(), BlobMaterialization.Implicit);
                        // Add the directory metadata to the list.
                        aFileMetadataList.add(directoryMetadata);
                    }
                }
            }
        }
        // The current directory listing is exhausted; resume the parent
        // listing, if any.
        if (dirIteratorStack.isEmpty()) {
            blobItemIterator = null;
        } else {
            // Pop the next directory item from the stack and decrement the
            // depth.
            blobItemIterator = dirIteratorStack.pop();
            --listingDepth;
            // Assertion: Listing depth should not be less than zero.
            if (listingDepth < 0) {
                throw new AssertionError("Non-negative listing depth expected");
            }
        }
    }
}
Also used: BlobListingDetails (com.microsoft.azure.storage.blob.BlobListingDetails), ListBlobItem (com.microsoft.azure.storage.blob.ListBlobItem), CloudBlockBlobWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudBlockBlobWrapper), CloudBlobWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudBlobWrapper), CloudPageBlobWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudPageBlobWrapper), BlobProperties (com.microsoft.azure.storage.blob.BlobProperties), Iterator (java.util.Iterator), CloudBlobDirectoryWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudBlobDirectoryWrapper)
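buildUpList avoids recursion by keeping an explicit stack of parent iterators: descending into a subdirectory pushes the current iterator and swaps in the child's; exhausting a listing pops the parent and resumes where it left off. The same technique can be sketched over a plain nested map (the method names and depth-limit convention below are illustrative, not the store's API):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Iterative depth-limited traversal using an explicit stack of iterators,
// mirroring buildUpList. A directory maps names to either a child directory
// (a Map) or a file (null); a negative maxDepth means unbounded depth.
class IterativeListing {
    @SuppressWarnings("unchecked")
    static List<String> list(Map<String, Object> root, int maxDepth) {
        List<String> out = new ArrayList<>();
        Deque<Iterator<Map.Entry<String, Object>>> stack = new ArrayDeque<>();
        Iterator<Map.Entry<String, Object>> it = root.entrySet().iterator();
        int depth = 1;
        while (it != null) {
            while (it.hasNext()) {
                Map.Entry<String, Object> e = it.next();
                if (e.getValue() instanceof Map) {
                    if (maxDepth < 0 || depth < maxDepth) {
                        // Remember where we were, then descend into the child.
                        stack.push(it);
                        depth++;
                        it = ((Map<String, Object>) e.getValue())
                                .entrySet().iterator();
                    } else {
                        // Depth limit reached: record the directory itself.
                        out.add(e.getKey() + "/");
                    }
                } else {
                    out.add(e.getKey()); // a plain file entry
                }
            }
            if (stack.isEmpty()) {
                it = null;               // traversal complete
            } else {
                it = stack.pop();        // resume the parent listing
                depth--;
            }
        }
        return out;
    }
}
```

Because each directory's iterator is resumed exactly where it was pushed, sorted input (as Azure blob listings are) yields sorted output, which is the property the javadoc above is after.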

Aggregations

CloudBlobWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudBlobWrapper): 15 usages
StorageException (com.microsoft.azure.storage.StorageException): 13 usages
IOException (java.io.IOException): 13 usages
URISyntaxException (java.net.URISyntaxException): 13 usages
UnsupportedEncodingException (java.io.UnsupportedEncodingException): 11 usages
InvalidKeyException (java.security.InvalidKeyException): 11 usages
BlobProperties (com.microsoft.azure.storage.blob.BlobProperties): 3 usages
ListBlobItem (com.microsoft.azure.storage.blob.ListBlobItem): 3 usages
BufferedInputStream (java.io.BufferedInputStream): 3 usages
DataInputStream (java.io.DataInputStream): 3 usages
DataOutputStream (java.io.DataOutputStream): 3 usages
CloudBlockBlobWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudBlockBlobWrapper): 3 usages
CloudPageBlobWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudPageBlobWrapper): 3 usages
InputStream (java.io.InputStream): 2 usages
OutputStream (java.io.OutputStream): 2 usages
CloudBlobDirectoryWrapper (org.apache.hadoop.fs.azure.StorageInterface.CloudBlobDirectoryWrapper): 2 usages
RetryExponentialRetry (com.microsoft.azure.storage.RetryExponentialRetry): 1 usage
BlobListingDetails (com.microsoft.azure.storage.blob.BlobListingDetails): 1 usage
BlobRequestOptions (com.microsoft.azure.storage.blob.BlobRequestOptions): 1 usage
ArrayList (java.util.ArrayList): 1 usage