Example 71 with ACL

Use of org.apache.zookeeper.data.ACL in project zookeeper by apache.

The class SaslAuthTest, method testInvalidSaslIds.

@Test
public void testInvalidSaslIds() throws Exception {
    ZooKeeper zk = createClient();
    List<String> invalidIds = new ArrayList<String>();
    invalidIds.add("user@KERB.REALM/server.com");
    invalidIds.add("user@KERB.REALM1@KERB.REALM2");
    int i = 0;
    for (String invalidId : invalidIds) {
        List<ACL> aclList = new ArrayList<ACL>();
        try {
            ACL acl = new ACL(0, new Id("sasl", invalidId));
            aclList.add(acl);
            zk.create("/invalid" + i, null, aclList, CreateMode.PERSISTENT);
            Assert.fail("SASLAuthenticationProvider.isValid() failed to catch invalid Id.");
        } catch (KeeperException.InvalidACLException e) {
            // expected: SASLAuthenticationProvider rejected the invalid id
        } finally {
            i++;
        }
    }
}
Also used: ZooKeeper (org.apache.zookeeper.ZooKeeper), TestableZooKeeper (org.apache.zookeeper.TestableZooKeeper), ArrayList (java.util.ArrayList), ACL (org.apache.zookeeper.data.ACL), Id (org.apache.zookeeper.data.Id), KeeperException (org.apache.zookeeper.KeeperException), Test (org.junit.Test)
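The two ids above are rejected because ZooKeeper's SASLAuthenticationProvider validates them as Kerberos principal names, which have the shape primary[/instance][@realm]: a '/' may not appear after the '@', and '@' may appear at most once. A simplified, hypothetical stand-in for that check (not ZooKeeper's actual implementation, which delegates to Hadoop's KerberosName parsing):

```java
public class SaslIdCheck {
    // Simplified sketch: accept primary[/instance][@realm], where none of the
    // three parts may itself contain '/' or '@'. This mirrors why the two
    // test ids above fail: one has '/' after '@', the other has two '@'s.
    static boolean looksLikeValidSaslId(String id) {
        return id.matches("[^/@]+(/[^/@]+)?(@[^/@]+)?");
    }

    public static void main(String[] args) {
        System.out.println(looksLikeValidSaslId("user/host@KERB.REALM"));       // prints "true"
        System.out.println(looksLikeValidSaslId("user@KERB.REALM/server.com")); // prints "false"
    }
}
```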

Example 72 with ACL

Use of org.apache.zookeeper.data.ACL in project storm by apache.

The class SupervisorUtils, method supervisorZkAcls.

static List<ACL> supervisorZkAcls() {
    final List<ACL> acls = new ArrayList<>();
    acls.add(ZooDefs.Ids.CREATOR_ALL_ACL.get(0));
    acls.add(new ACL((ZooDefs.Perms.READ ^ ZooDefs.Perms.CREATE), ZooDefs.Ids.ANYONE_ID_UNSAFE));
    return acls;
}
Also used: ArrayList (java.util.ArrayList), ACL (org.apache.zookeeper.data.ACL)
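The `READ ^ CREATE` above is a bitwise XOR, but because ZooKeeper's permission flags are distinct single bits (ZooDefs.Perms defines READ = 1, WRITE = 2, CREATE = 4, DELETE = 8, ADMIN = 16), XOR of two different flags gives the same result as the more conventional OR. A self-contained check with those constant values copied in:

```java
public class PermsDemo {
    // The single-bit permission flags, with the values defined in
    // org.apache.zookeeper.ZooDefs.Perms.
    static final int READ = 1 << 0, WRITE = 1 << 1, CREATE = 1 << 2,
                     DELETE = 1 << 3, ADMIN = 1 << 4;

    public static void main(String[] args) {
        // XOR and OR agree whenever the operands share no set bits.
        System.out.println((READ ^ CREATE) == (READ | CREATE)); // prints "true"
        System.out.println(READ | CREATE);                      // prints "5"
    }
}
```

Using `|` would arguably state the intent more clearly, but the Storm code above is correct as written.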

Example 73 with ACL

Use of org.apache.zookeeper.data.ACL in project storm by apache.

The class Worker, method start.

public void start() throws Exception {
    LOG.info("Launching worker for {} on {}:{} with id {} and conf {}", topologyId, assignmentId, port, workerId, conf);
    // if ConfigUtils.isLocalMode(conf) returns false then it is in distributed mode.
    if (!ConfigUtils.isLocalMode(conf)) {
        // Distributed mode
        SysOutOverSLF4J.sendSystemOutAndErrToSLF4J();
        String pid = Utils.processPid();
        FileUtils.touch(new File(ConfigUtils.workerPidPath(conf, workerId, pid)));
        FileUtils.writeStringToFile(new File(ConfigUtils.workerArtifactsPidPath(conf, topologyId, port)), pid, Charset.forName("UTF-8"));
    }
    final Map topologyConf = ConfigUtils.overrideLoginConfigWithSystemProperty(ConfigUtils.readSupervisorStormConf(conf, topologyId));
    List<ACL> acls = Utils.getWorkerACL(topologyConf);
    IStateStorage stateStorage = ClusterUtils.mkStateStorage(conf, topologyConf, acls, new ClusterStateContext(DaemonType.WORKER));
    IStormClusterState stormClusterState = ClusterUtils.mkStormClusterState(stateStorage, acls, new ClusterStateContext());
    Credentials initialCredentials = stormClusterState.credentials(topologyId, null);
    Map<String, String> initCreds = new HashMap<>();
    if (initialCredentials != null) {
        initCreds.putAll(initialCredentials.get_creds());
    }
    autoCreds = AuthUtils.GetAutoCredentials(topologyConf);
    subject = AuthUtils.populateSubject(null, autoCreds, initCreds);
    Subject.doAs(subject, new PrivilegedExceptionAction<Object>() {

        @Override
        public Object run() throws Exception {
            workerState = new WorkerState(conf, context, topologyId, assignmentId, port, workerId, topologyConf, stateStorage, stormClusterState);
            // Heartbeat here so that worker process dies if this fails
            // it's important that worker heartbeat to supervisor ASAP so that supervisor knows
            // that worker is running and moves on
            doHeartBeat();
            executorsAtom = new AtomicReference<>(null);
            // launch heartbeat threads immediately so that slow-loading tasks don't cause the worker to timeout
            // to the supervisor
            workerState.heartbeatTimer.scheduleRecurring(0, (Integer) conf.get(Config.WORKER_HEARTBEAT_FREQUENCY_SECS), () -> {
                try {
                    doHeartBeat();
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
            workerState.executorHeartbeatTimer.scheduleRecurring(0, (Integer) conf.get(Config.WORKER_HEARTBEAT_FREQUENCY_SECS), Worker.this::doExecutorHeartbeats);
            workerState.registerCallbacks();
            workerState.refreshConnections(null);
            workerState.activateWorkerWhenAllConnectionsReady();
            workerState.refreshStormActive(null);
            workerState.runWorkerStartHooks();
            List<IRunningExecutor> newExecutors = new ArrayList<IRunningExecutor>();
            for (List<Long> e : workerState.getExecutors()) {
                if (ConfigUtils.isLocalMode(topologyConf)) {
                    newExecutors.add(LocalExecutor.mkExecutor(workerState, e, initCreds).execute());
                } else {
                    newExecutors.add(Executor.mkExecutor(workerState, e, initCreds).execute());
                }
            }
            executorsAtom.set(newExecutors);
            EventHandler<Object> tupleHandler = (packets, seqId, batchEnd) -> workerState.sendTuplesToRemoteWorker((HashMap<Integer, ArrayList<TaskMessage>>) packets, seqId, batchEnd);
            // This thread will publish the messages destined for remote tasks to remote connections
            transferThread = Utils.asyncLoop(() -> {
                workerState.transferQueue.consumeBatchWhenAvailable(tupleHandler);
                return 0L;
            });
            DisruptorBackpressureCallback disruptorBackpressureHandler = mkDisruptorBackpressureHandler(workerState);
            workerState.transferQueue.registerBackpressureCallback(disruptorBackpressureHandler);
            workerState.transferQueue.setEnableBackpressure((Boolean) topologyConf.get(Config.TOPOLOGY_BACKPRESSURE_ENABLE));
            workerState.transferQueue.setHighWaterMark(Utils.getDouble(topologyConf.get(Config.BACKPRESSURE_DISRUPTOR_HIGH_WATERMARK)));
            workerState.transferQueue.setLowWaterMark(Utils.getDouble(topologyConf.get(Config.BACKPRESSURE_DISRUPTOR_LOW_WATERMARK)));
            WorkerBackpressureCallback backpressureCallback = mkBackpressureHandler();
            backpressureThread = new WorkerBackpressureThread(workerState.backpressureTrigger, workerState, backpressureCallback);
            if ((Boolean) topologyConf.get(Config.TOPOLOGY_BACKPRESSURE_ENABLE)) {
                backpressureThread.start();
                stormClusterState.topologyBackpressure(topologyId, workerState::refreshThrottle);
                int pollingSecs = Utils.getInt(topologyConf.get(Config.TASK_BACKPRESSURE_POLL_SECS));
                workerState.refreshBackpressureTimer.scheduleRecurring(0, pollingSecs, workerState::refreshThrottle);
            }
            credentialsAtom = new AtomicReference<Credentials>(initialCredentials);
            establishLogSettingCallback();
            workerState.stormClusterState.credentials(topologyId, Worker.this::checkCredentialsChanged);
            workerState.refreshCredentialsTimer.scheduleRecurring(0, (Integer) conf.get(Config.TASK_CREDENTIALS_POLL_SECS), new Runnable() {

                @Override
                public void run() {
                    checkCredentialsChanged();
                    if ((Boolean) topologyConf.get(Config.TOPOLOGY_BACKPRESSURE_ENABLE)) {
                        checkThrottleChanged();
                    }
                }
            });
            // The jitter allows the clients to get the data at different times, and avoids thundering herd
            if (!(Boolean) topologyConf.get(Config.TOPOLOGY_DISABLE_LOADAWARE_MESSAGING)) {
                workerState.refreshLoadTimer.scheduleRecurringWithJitter(0, 1, 500, workerState::refreshLoad);
            }
            workerState.refreshConnectionsTimer.scheduleRecurring(0, (Integer) conf.get(Config.TASK_REFRESH_POLL_SECS), workerState::refreshConnections);
            workerState.resetLogLevelsTimer.scheduleRecurring(0, (Integer) conf.get(Config.WORKER_LOG_LEVEL_RESET_POLL_SECS), logConfigManager::resetLogLevels);
            workerState.refreshActiveTimer.scheduleRecurring(0, (Integer) conf.get(Config.TASK_REFRESH_POLL_SECS), workerState::refreshStormActive);
            LOG.info("Worker has topology config {}", Utils.redactValue(topologyConf, Config.STORM_ZOOKEEPER_TOPOLOGY_AUTH_PAYLOAD));
            LOG.info("Worker {} for storm {} on {}:{}  has finished loading", workerId, topologyId, assignmentId, port);
            return this;
        }

    });
}
Also used: WorkerBackpressureCallback (org.apache.storm.utils.WorkerBackpressureCallback), HashMap (java.util.HashMap), EventHandler (com.lmax.disruptor.EventHandler), IRunningExecutor (org.apache.storm.executor.IRunningExecutor), List (java.util.List), ArrayList (java.util.ArrayList), IStormClusterState (org.apache.storm.cluster.IStormClusterState), DisruptorBackpressureCallback (org.apache.storm.utils.DisruptorBackpressureCallback), ACL (org.apache.zookeeper.data.ACL), AtomicReference (java.util.concurrent.atomic.AtomicReference), IOException (java.io.IOException), WorkerBackpressureThread (org.apache.storm.utils.WorkerBackpressureThread), File (java.io.File), Map (java.util.Map), Credentials (org.apache.storm.generated.Credentials), IAutoCredentials (org.apache.storm.security.auth.IAutoCredentials), IStateStorage (org.apache.storm.cluster.IStateStorage), ClusterStateContext (org.apache.storm.cluster.ClusterStateContext), TaskMessage (org.apache.storm.messaging.TaskMessage)
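Worker.start above schedules the load refresh with scheduleRecurringWithJitter so that the many workers of a topology do not all query ZooKeeper at the same instant. A hypothetical sketch of how such a jittered delay can be computed (not Storm's actual timer implementation):

```java
import java.util.concurrent.ThreadLocalRandom;

public class JitterDemo {
    // Base period plus a random jitter: each client wakes at a slightly
    // different time, avoiding a thundering herd against a shared service.
    static long jitteredDelayMs(long periodMs, long maxJitterMs) {
        return periodMs + ThreadLocalRandom.current().nextLong(maxJitterMs + 1);
    }

    public static void main(String[] args) {
        // Mirrors the refreshLoad call above: 1s period, up to 500ms jitter.
        long d = jitteredDelayMs(1000, 500);
        System.out.println(d >= 1000 && d <= 1500); // prints "true"
    }
}
```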

Example 74 with ACL

Use of org.apache.zookeeper.data.ACL in project hbase by apache.

The class TestZKUtil, method testUnsecure.

@Test
public void testUnsecure() throws ZooKeeperConnectionException, IOException {
    Configuration conf = HBaseConfiguration.create();
    conf.set(Superusers.SUPERUSER_CONF_KEY, "user1");
    String node = "/hbase/testUnsecure";
    ZooKeeperWatcher watcher = new ZooKeeperWatcher(conf, node, null, false);
    List<ACL> aclList = ZKUtil.createACL(watcher, node, false);
    Assert.assertEquals(aclList.size(), 1);
    Assert.assertTrue(aclList.contains(Ids.OPEN_ACL_UNSAFE.iterator().next()));
}
Also used: HBaseConfiguration (org.apache.hadoop.hbase.HBaseConfiguration), Configuration (org.apache.hadoop.conf.Configuration), ACL (org.apache.zookeeper.data.ACL), Test (org.junit.Test)
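Ids.OPEN_ACL_UNSAFE, which this test expects for the unsecure case, is a single ACL granting Perms.ALL to the world:anyone id. ALL is just the OR of the five single-bit permissions; a quick check using the values defined in ZooDefs.Perms:

```java
public class AllPermsDemo {
    // Permission bit values as defined in org.apache.zookeeper.ZooDefs.Perms.
    static final int READ = 1, WRITE = 2, CREATE = 4, DELETE = 8, ADMIN = 16;

    public static void main(String[] args) {
        int all = READ | WRITE | CREATE | DELETE | ADMIN;
        System.out.println(all); // prints "31", the value of ZooDefs.Perms.ALL
    }
}
```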

Example 75 with ACL

Use of org.apache.zookeeper.data.ACL in project hbase by apache.

The class TestZKUtil, method testSecuritySingleSuperuser.

@Test
public void testSecuritySingleSuperuser() throws ZooKeeperConnectionException, IOException {
    Configuration conf = HBaseConfiguration.create();
    conf.set(Superusers.SUPERUSER_CONF_KEY, "user1");
    String node = "/hbase/testSecuritySingleSuperuser";
    ZooKeeperWatcher watcher = new ZooKeeperWatcher(conf, node, null, false);
    List<ACL> aclList = ZKUtil.createACL(watcher, node, true);
    // 1+1, since ACL will be set for the creator by default
    Assert.assertEquals(aclList.size(), 2);
    Assert.assertTrue(aclList.contains(new ACL(Perms.ALL, new Id("sasl", "user1"))));
    Assert.assertTrue(aclList.contains(Ids.CREATOR_ALL_ACL.iterator().next()));
}
Also used: HBaseConfiguration (org.apache.hadoop.hbase.HBaseConfiguration), Configuration (org.apache.hadoop.conf.Configuration), ACL (org.apache.zookeeper.data.ACL), Id (org.apache.zookeeper.data.Id), Test (org.junit.Test)

Aggregations

ACL (org.apache.zookeeper.data.ACL): 108
Id (org.apache.zookeeper.data.Id): 43
Test (org.junit.Test): 43
ArrayList (java.util.ArrayList): 33
Stat (org.apache.zookeeper.data.Stat): 19
KeeperException (org.apache.zookeeper.KeeperException): 17
Configuration (org.apache.hadoop.conf.Configuration): 10
ZooKeeper (org.apache.zookeeper.ZooKeeper): 10
Test (org.testng.annotations.Test): 9
CuratorFramework (org.apache.curator.framework.CuratorFramework): 8
IOException (java.io.IOException): 6
File (java.io.File): 5
ACLProvider (org.apache.curator.framework.api.ACLProvider): 5
TestableZooKeeper (org.apache.zookeeper.TestableZooKeeper): 5
HashMap (java.util.HashMap): 4
List (java.util.List): 4
Map (java.util.Map): 4
ByteArrayInputStream (java.io.ByteArrayInputStream): 3
ByteArrayOutputStream (java.io.ByteArrayOutputStream): 3
HBaseConfiguration (org.apache.hadoop.hbase.HBaseConfiguration): 3