
Example 31 with AuthorizationException

Use of org.apache.storm.generated.AuthorizationException in project storm by apache.

The class BlobStoreAclHandler, method validateSettableACLs.

public static void validateSettableACLs(String key, List<AccessControl> acls) throws AuthorizationException {
    Set<String> aclUsers = new HashSet<>();
    List<String> duplicateUsers = new ArrayList<>();
    for (AccessControl acl : acls) {
        String aclUser = acl.get_name();
        if (!StringUtils.isEmpty(aclUser) && !aclUsers.add(aclUser)) {
            LOG.error("'{}' user can't appear more than once in the ACLs", aclUser);
            duplicateUsers.add(aclUser);
        }
    }
    if (duplicateUsers.size() > 0) {
        String errorMessage = "user " + Arrays.toString(duplicateUsers.toArray()) + " can't appear more than once in the ACLs for key [" + key + "].";
        throw new AuthorizationException(errorMessage);
    }
}
Also used : AuthorizationException(org.apache.storm.generated.AuthorizationException) ArrayList(java.util.ArrayList) AccessControl(org.apache.storm.generated.AccessControl) HashSet(java.util.HashSet)
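
A minimal usage sketch of the check above, assuming the Thrift-generated AccessControl setters (set_name, set_type, set_access), the AccessControlType enum, and AuthorizationException.get_msg() follow the same underscore naming as the get_name() call in the method; ValidateAclsSketch is a hypothetical driver class, and the access values 1/2 stand for READ/WRITE in the blob store ACL encoding.

import java.util.Arrays;
import org.apache.storm.blobstore.BlobStoreAclHandler;
import org.apache.storm.generated.AccessControl;
import org.apache.storm.generated.AccessControlType;
import org.apache.storm.generated.AuthorizationException;

public class ValidateAclsSketch {
    public static void main(String[] args) {
        // Two ACL entries for the same user name should be rejected.
        AccessControl first = new AccessControl();
        first.set_type(AccessControlType.USER);
        first.set_name("alice");
        // 1 = READ in the blob store ACL encoding (assumed here).
        first.set_access(1);

        AccessControl second = new AccessControl();
        second.set_type(AccessControlType.USER);
        second.set_name("alice");
        // 2 = WRITE (assumed here).
        second.set_access(2);

        try {
            BlobStoreAclHandler.validateSettableACLs("my-key", Arrays.asList(first, second));
        } catch (AuthorizationException e) {
            // Expected: user [alice] can't appear more than once in the ACLs for key [my-key].
            System.out.println(e.get_msg());
        }
    }
}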

Example 32 with AuthorizationException

Use of org.apache.storm.generated.AuthorizationException in project storm by apache.

The class BlobStoreAclHandler, method hasPermissions.

/**
 * Validates that the user has at least the set of permissions
 * specified in the mask.
 * @param acl ACL for the key.
 * @param mask cumulative value of the required permissions:
 *             READ = 1, WRITE = 2, ADMIN = 4.
 *             mask = 1 implies READ; mask = 5 implies READ and ADMIN.
 * @param who the user against whom the permissions for the key are
 *            validated using the ACL and the mask.
 * @param key key used to identify the blob.
 * @throws AuthorizationException if the user lacks any of the permissions required by the mask.
 */
public void hasPermissions(List<AccessControl> acl, int mask, Subject who, String key) throws AuthorizationException {
    if (!doAclValidation) {
        return;
    }
    Set<String> user = constructUserFromPrincipals(who);
    LOG.debug("user {}", user);
    if (checkForValidUsers(who, mask)) {
        return;
    }
    for (AccessControl ac : acl) {
        int allowed = getAllowed(ac, user);
        mask = ~allowed & mask;
        LOG.debug(" user: {} allowed: {} disallowed: {} key: {}", user, allowed, mask, key);
    }
    if (mask == 0) {
        return;
    }
    throw new AuthorizationException(user + " does not have " + namedPerms(mask) + " access to " + key);
}
Also used : AuthorizationException(org.apache.storm.generated.AuthorizationException) AccessControl(org.apache.storm.generated.AccessControl)
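
To make the bit arithmetic in hasPermissions concrete, here is a small self-contained sketch: each ACL entry clears the bits it grants from the required mask, and any bit still set at the end is a missing permission. The 1/2/4 values mirror the READ/WRITE/ADMIN encoding documented in the Javadoc above; the class name is a placeholder.

public class MaskArithmeticSketch {
    // Same encoding as the Javadoc above: READ = 1, WRITE = 2, ADMIN = 4.
    static final int READ = 1, WRITE = 2, ADMIN = 4;

    public static void main(String[] args) {
        int mask = READ | ADMIN;    // caller requires READ and ADMIN -> 5 (binary 101)
        int allowed = READ | WRITE; // the ACL entry grants READ and WRITE -> 3 (binary 011)

        // Clear the granted bits from the required mask, as the loop in hasPermissions does.
        mask = ~allowed & mask;     // binary 100 -> 4

        // ADMIN (4) is still required but not granted, so hasPermissions would
        // throw AuthorizationException at this point.
        System.out.println("remaining mask = " + mask); // prints 4
    }
}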

Example 33 with AuthorizationException

Use of org.apache.storm.generated.AuthorizationException in project storm by apache.

The class BlobStoreUtils, method downloadMissingBlob.

// Download missing blobs from potential nimbodes
public static boolean downloadMissingBlob(Map<String, Object> conf, BlobStore blobStore, String key, Set<NimbusInfo> nimbusInfos) throws TTransportException {
    ReadableBlobMeta rbm;
    ClientBlobStore remoteBlobStore;
    InputStreamWithMeta in;
    boolean isSuccess = false;
    LOG.debug("Download blob NimbusInfos {}", nimbusInfos);
    for (NimbusInfo nimbusInfo : nimbusInfos) {
        if (isSuccess) {
            break;
        }
        LOG.debug("Download blob key: {}, NimbusInfo {}", key, nimbusInfo);
        try (NimbusClient client = new NimbusClient(conf, nimbusInfo.getHost(), nimbusInfo.getPort(), null)) {
            rbm = client.getClient().getBlobMeta(key);
            remoteBlobStore = new NimbusBlobStore();
            remoteBlobStore.setClient(conf, client);
            in = remoteBlobStore.getBlob(key);
            blobStore.createBlob(key, in, rbm.get_settable(), getNimbusSubject());
            // Confirm the blob was actually created by checking that the key is now present locally.
            Iterator<String> keyIterator = blobStore.listKeys();
            while (keyIterator.hasNext()) {
                if (keyIterator.next().equals(key)) {
                    LOG.debug("Success creating key, {}", key);
                    isSuccess = true;
                    break;
                }
            }
        } catch (IOException | AuthorizationException exception) {
            throw new RuntimeException(exception);
        } catch (KeyAlreadyExistsException kae) {
            LOG.info("KeyAlreadyExistsException Key: {} {}", key, kae);
        } catch (KeyNotFoundException knf) {
            // Catching and logging KeyNotFoundException because, if
            // there is a subsequent update and delete, the non-leader
            // nimbodes might throw an exception.
            LOG.info("KeyNotFoundException Key: {} {}", key, knf);
        } catch (Exception exp) {
            // Log any other failure (e.g. while connecting to this nimbus) and try the next one.
            LOG.error("Exception {}", exp);
        }
    }
    if (!isSuccess) {
        LOG.error("Could not download the blob with key: {}", key);
    }
    return isSuccess;
}
Also used : AuthorizationException (org.apache.storm.generated.AuthorizationException) ReadableBlobMeta (org.apache.storm.generated.ReadableBlobMeta) NimbusClient (org.apache.storm.utils.NimbusClient) IOException (java.io.IOException) KeyAlreadyExistsException (org.apache.storm.generated.KeyAlreadyExistsException) TTransportException (org.apache.thrift.transport.TTransportException) KeeperException (org.apache.zookeeper.KeeperException) KeyNotFoundException (org.apache.storm.generated.KeyNotFoundException) NoNodeException (org.apache.zookeeper.KeeperException.NoNodeException) NimbusInfo (org.apache.storm.nimbus.NimbusInfo)
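
A hedged sketch of a caller that uses downloadMissingBlob to backfill blobs that exist elsewhere in the cluster but not in the local store. The backfillMissing helper and the wantedKeys set are hypothetical; only the BlobStoreUtils, BlobStore, and NimbusInfo types come from the example above, and BlobStoreUtils is assumed to live in org.apache.storm.blobstore.

import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.apache.storm.blobstore.BlobStore;
import org.apache.storm.blobstore.BlobStoreUtils;
import org.apache.storm.nimbus.NimbusInfo;
import org.apache.thrift.transport.TTransportException;

public class BlobBackfillSketch {
    // Hypothetical helper: fetch every wanted key that the local store does not have yet.
    static void backfillMissing(Map<String, Object> conf, BlobStore blobStore,
                                Set<String> wantedKeys, Set<NimbusInfo> nimbusInfos)
            throws TTransportException {
        Set<String> localKeys = new HashSet<>();
        blobStore.listKeys().forEachRemaining(localKeys::add);
        for (String key : wantedKeys) {
            if (!localKeys.contains(key)) {
                // Tries each nimbus in turn; false means no nimbus could supply the blob.
                boolean ok = BlobStoreUtils.downloadMissingBlob(conf, blobStore, key, nimbusInfos);
                if (!ok) {
                    System.err.println("blob " + key + " could not be fetched from any nimbus");
                }
            }
        }
    }
}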

Example 34 with AuthorizationException

Use of org.apache.storm.generated.AuthorizationException in project storm by apache.

The class BlobStoreUtils, method downloadUpdatedBlob.

// Download updated blobs from potential nimbodes
public static boolean downloadUpdatedBlob(Map<String, Object> conf, BlobStore blobStore, String key, Set<NimbusInfo> nimbusInfos) throws TTransportException {
    ClientBlobStore remoteBlobStore;
    InputStreamWithMeta in;
    AtomicOutputStream out;
    boolean isSuccess = false;
    LOG.debug("Download blob NimbusInfos {}", nimbusInfos);
    for (NimbusInfo nimbusInfo : nimbusInfos) {
        if (isSuccess) {
            break;
        }
        try (NimbusClient client = new NimbusClient(conf, nimbusInfo.getHost(), nimbusInfo.getPort(), null)) {
            remoteBlobStore = new NimbusBlobStore();
            remoteBlobStore.setClient(conf, client);
            in = remoteBlobStore.getBlob(key);
            out = blobStore.updateBlob(key, getNimbusSubject());
            byte[] buffer = new byte[2048];
            int len = 0;
            while ((len = in.read(buffer)) > 0) {
                out.write(buffer, 0, len);
            }
            if (out != null) {
                out.close();
            }
            isSuccess = true;
        } catch (IOException | AuthorizationException exception) {
            throw new RuntimeException(exception);
        } catch (KeyNotFoundException knf) {
            // Catching and logging KeyNotFoundException because, if
            // there is a subsequent update and delete, the non-leader
            // nimbodes might throw an exception.
            LOG.info("KeyNotFoundException {}", knf);
        } catch (Exception exp) {
            // Log any other failure (e.g. while connecting to this nimbus) and try the next one.
            LOG.error("Exception {}", exp);
        }
    }
    if (!isSuccess) {
        LOG.error("Could not update the blob with key: {}", key);
    }
    return isSuccess;
}
Also used : AuthorizationException (org.apache.storm.generated.AuthorizationException) NimbusClient (org.apache.storm.utils.NimbusClient) IOException (java.io.IOException) TTransportException (org.apache.thrift.transport.TTransportException) KeeperException (org.apache.zookeeper.KeeperException) KeyAlreadyExistsException (org.apache.storm.generated.KeyAlreadyExistsException) KeyNotFoundException (org.apache.storm.generated.KeyNotFoundException) NoNodeException (org.apache.zookeeper.KeeperException.NoNodeException) NimbusInfo (org.apache.storm.nimbus.NimbusInfo)
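
The update path above streams the remote blob into the AtomicOutputStream returned by updateBlob and commits the new content by closing the stream. Below is a hedged sketch of the same copy idiom written with explicit cleanup, assuming AtomicOutputStream.cancel() (from org.apache.storm.blobstore) is available to abandon a half-written update; copyAndCommit is a hypothetical helper.

import java.io.IOException;
import java.io.InputStream;
import org.apache.storm.blobstore.AtomicOutputStream;

public class BlobCopySketch {
    // Copies in into out; closing out commits the update, cancel() discards it.
    static void copyAndCommit(InputStream in, AtomicOutputStream out) throws IOException {
        byte[] buffer = new byte[2048];
        try {
            int len;
            while ((len = in.read(buffer)) > 0) {
                out.write(buffer, 0, len);
            }
            out.close();   // commit the updated blob
        } catch (IOException e) {
            out.cancel();  // discard the partially written update (assumed API)
            throw e;
        } finally {
            in.close();    // the remote stream is no longer needed either way
        }
    }
}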

Example 35 with AuthorizationException

Use of org.apache.storm.generated.AuthorizationException in project storm by apache.

The class Zookeeper, method leaderLatchListenerImpl.

// Leader latch listener that will be invoked when we either gain or lose leadership
public static LeaderLatchListener leaderLatchListenerImpl(final Map conf, final CuratorFramework zk, final BlobStore blobStore, final LeaderLatch leaderLatch) throws UnknownHostException {
    final String hostName = InetAddress.getLocalHost().getCanonicalHostName();
    return new LeaderLatchListener() {

        final String STORM_JAR_SUFFIX = "-stormjar.jar";

        final String STORM_CODE_SUFFIX = "-stormcode.ser";

        final String STORM_CONF_SUFFIX = "-stormconf.ser";

        @Override
        public void isLeader() {
            Set<String> activeTopologyIds = new TreeSet<>(Zookeeper.getChildren(zk, conf.get(Config.STORM_ZOOKEEPER_ROOT) + ClusterUtils.STORMS_SUBTREE, false));
            Set<String> activeTopologyBlobKeys = populateTopologyBlobKeys(activeTopologyIds);
            Set<String> activeTopologyCodeKeys = filterTopologyCodeKeys(activeTopologyBlobKeys);
            Set<String> allLocalBlobKeys = Sets.newHashSet(blobStore.listKeys());
            Set<String> allLocalTopologyBlobKeys = filterTopologyBlobKeys(allLocalBlobKeys);
            // this finds all active topologies blob keys from all local topology blob keys
            Sets.SetView<String> diffTopology = Sets.difference(activeTopologyBlobKeys, allLocalTopologyBlobKeys);
            LOG.info("active-topology-blobs [{}] local-topology-blobs [{}] diff-topology-blobs [{}]", generateJoinedString(activeTopologyIds), generateJoinedString(allLocalTopologyBlobKeys), generateJoinedString(diffTopology));
            if (diffTopology.isEmpty()) {
                Set<String> activeTopologyDependencies = getTopologyDependencyKeys(activeTopologyCodeKeys);
                // this finds all dependency blob keys from active topologies from all local blob keys
                Sets.SetView<String> diffDependencies = Sets.difference(activeTopologyDependencies, allLocalBlobKeys);
                LOG.info("active-topology-dependencies [{}] local-blobs [{}] diff-topology-dependencies [{}]", generateJoinedString(activeTopologyDependencies), generateJoinedString(allLocalBlobKeys), generateJoinedString(diffDependencies));
                if (diffDependencies.isEmpty()) {
                    LOG.info("Accepting leadership, all active topologies and corresponding dependencies found locally.");
                } else {
                    LOG.info("Code for all active topologies is available locally, but some dependencies are not found locally, giving up leadership.");
                    closeLatch();
                }
            } else {
                LOG.info("code for all active topologies not available locally, giving up leadership.");
                closeLatch();
            }
        }

        @Override
        public void notLeader() {
            LOG.info("{} lost leadership.", hostName);
        }

        private String generateJoinedString(Set<String> activeTopologyIds) {
            return Joiner.on(",").join(activeTopologyIds);
        }

        private Set<String> populateTopologyBlobKeys(Set<String> activeTopologyIds) {
            Set<String> activeTopologyBlobKeys = new TreeSet<>();
            for (String activeTopologyId : activeTopologyIds) {
                activeTopologyBlobKeys.add(activeTopologyId + STORM_JAR_SUFFIX);
                activeTopologyBlobKeys.add(activeTopologyId + STORM_CODE_SUFFIX);
                activeTopologyBlobKeys.add(activeTopologyId + STORM_CONF_SUFFIX);
            }
            return activeTopologyBlobKeys;
        }

        private Set<String> filterTopologyBlobKeys(Set<String> blobKeys) {
            Set<String> topologyBlobKeys = new HashSet<>();
            for (String blobKey : blobKeys) {
                if (blobKey.endsWith(STORM_JAR_SUFFIX) || blobKey.endsWith(STORM_CODE_SUFFIX) || blobKey.endsWith(STORM_CONF_SUFFIX)) {
                    topologyBlobKeys.add(blobKey);
                }
            }
            return topologyBlobKeys;
        }

        private Set<String> filterTopologyCodeKeys(Set<String> blobKeys) {
            Set<String> topologyCodeKeys = new HashSet<>();
            for (String blobKey : blobKeys) {
                if (blobKey.endsWith(STORM_CODE_SUFFIX)) {
                    topologyCodeKeys.add(blobKey);
                }
            }
            return topologyCodeKeys;
        }

        private Set<String> getTopologyDependencyKeys(Set<String> activeTopologyCodeKeys) {
            Set<String> activeTopologyDependencies = new TreeSet<>();
            Subject subject = ReqContext.context().subject();
            for (String activeTopologyCodeKey : activeTopologyCodeKeys) {
                try {
                    InputStreamWithMeta blob = blobStore.getBlob(activeTopologyCodeKey, subject);
                    byte[] blobContent = IOUtils.readFully(blob, new Long(blob.getFileLength()).intValue());
                    StormTopology stormCode = Utils.deserialize(blobContent, StormTopology.class);
                    if (stormCode.is_set_dependency_jars()) {
                        activeTopologyDependencies.addAll(stormCode.get_dependency_jars());
                    }
                    if (stormCode.is_set_dependency_artifacts()) {
                        activeTopologyDependencies.addAll(stormCode.get_dependency_artifacts());
                    }
                } catch (AuthorizationException | KeyNotFoundException | IOException e) {
                    LOG.error("Exception occurs while reading blob for key: " + activeTopologyCodeKey + ", exception: " + e, e);
                    throw new RuntimeException("Exception occurs while reading blob for key: " + activeTopologyCodeKey + ", exception: " + e, e);
                }
            }
            return activeTopologyDependencies;
        }

        private void closeLatch() {
            try {
                leaderLatch.close();
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        }
    };
}
Also used : AuthorizationException(org.apache.storm.generated.AuthorizationException) StormTopology(org.apache.storm.generated.StormTopology) IOException(java.io.IOException) Subject(javax.security.auth.Subject) InputStreamWithMeta(org.apache.storm.blobstore.InputStreamWithMeta) Sets(com.google.common.collect.Sets) LeaderLatchListener(org.apache.curator.framework.recipes.leader.LeaderLatchListener) KeyNotFoundException(org.apache.storm.generated.KeyNotFoundException)
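
For context, a hedged sketch of how a listener like this is typically wired into Curator's LeaderLatch. The ZooKeeper connect string and latch path are placeholders, and the sketch assumes leaderLatchListenerImpl lives in org.apache.storm.zookeeper.Zookeeper as in the example above.

import java.util.Map;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.framework.recipes.leader.LeaderLatchListener;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.storm.blobstore.BlobStore;
import org.apache.storm.zookeeper.Zookeeper;

public class LeaderLatchWiringSketch {
    static LeaderLatch startLeaderElection(Map conf, BlobStore blobStore) throws Exception {
        // Placeholder connect string and retry policy.
        CuratorFramework zk = CuratorFrameworkFactory.newClient(
                "zk-host:2181", new ExponentialBackoffRetry(1000, 3));
        zk.start();
        // Placeholder latch path.
        LeaderLatch leaderLatch = new LeaderLatch(zk, "/storm/leader-lock");
        LeaderLatchListener listener =
                Zookeeper.leaderLatchListenerImpl(conf, zk, blobStore, leaderLatch);
        // isLeader()/notLeader() fire when this node gains or loses leadership.
        leaderLatch.addListener(listener);
        leaderLatch.start();
        return leaderLatch;
    }
}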

Aggregations

AuthorizationException (org.apache.storm.generated.AuthorizationException) - 36 uses
KeyNotFoundException (org.apache.storm.generated.KeyNotFoundException) - 26 uses
IOException (java.io.IOException) - 25 uses
KeyAlreadyExistsException (org.apache.storm.generated.KeyAlreadyExistsException) - 21 uses
TException (org.apache.thrift.TException) - 21 uses
AlreadyAliveException (org.apache.storm.generated.AlreadyAliveException) - 20 uses
InvalidTopologyException (org.apache.storm.generated.InvalidTopologyException) - 20 uses
InterruptedIOException (java.io.InterruptedIOException) - 18 uses
BindException (java.net.BindException) - 18 uses
NotAliveException (org.apache.storm.generated.NotAliveException) - 18 uses
ArrayList (java.util.ArrayList) - 10 uses
HashMap (java.util.HashMap) - 10 uses
List (java.util.List) - 8 uses
Map (java.util.Map) - 8 uses
IStormClusterState (org.apache.storm.cluster.IStormClusterState) - 7 uses
ImmutableMap (com.google.common.collect.ImmutableMap) - 4 uses
File (java.io.File) - 4 uses
NodeInfo (org.apache.storm.generated.NodeInfo) - 4 uses
BufferInputStream (org.apache.storm.utils.BufferInputStream) - 4 uses
TimeCacheMap (org.apache.storm.utils.TimeCacheMap) - 4 uses