Search in sources:

Example 66 with KeeperException

Use of org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.KeeperException in project pulsar by yahoo.

The class BookKeeperClientFactoryImpl, method create.

@Override
public BookKeeper create(ServiceConfiguration conf, ZooKeeper zkClient) throws IOException {
    ClientConfiguration bkConf = new ClientConfiguration();
    if (conf.getBookkeeperClientAuthenticationPlugin() != null && conf.getBookkeeperClientAuthenticationPlugin().trim().length() > 0) {
        bkConf.setClientAuthProviderFactoryClass(conf.getBookkeeperClientAuthenticationPlugin());
        bkConf.setProperty(conf.getBookkeeperClientAuthenticationParametersName(), conf.getBookkeeperClientAuthenticationParameters());
    }
    bkConf.setThrottleValue(0);
    bkConf.setAddEntryTimeout((int) conf.getBookkeeperClientTimeoutInSeconds());
    bkConf.setReadEntryTimeout((int) conf.getBookkeeperClientTimeoutInSeconds());
    bkConf.setSpeculativeReadTimeout(conf.getBookkeeperClientSpeculativeReadTimeoutInMillis());
    bkConf.setNumChannelsPerBookie(16);
    bkConf.setUseV2WireProtocol(true);
    bkConf.setLedgerManagerFactoryClassName(HierarchicalLedgerManagerFactory.class.getName());
    if (conf.isBookkeeperClientHealthCheckEnabled()) {
        bkConf.enableBookieHealthCheck();
        bkConf.setBookieHealthCheckInterval(conf.getBookkeeperHealthCheckIntervalSec(), TimeUnit.SECONDS);
        bkConf.setBookieErrorThresholdPerInterval(conf.getBookkeeperClientHealthCheckErrorThresholdPerInterval());
        bkConf.setBookieQuarantineTime((int) conf.getBookkeeperClientHealthCheckQuarantineTimeInSeconds(), TimeUnit.SECONDS);
    }
    if (conf.isBookkeeperClientRackawarePolicyEnabled()) {
        bkConf.setEnsemblePlacementPolicy(RackawareEnsemblePlacementPolicy.class);
        bkConf.setProperty(RackawareEnsemblePlacementPolicy.REPP_DNS_RESOLVER_CLASS, ZkBookieRackAffinityMapping.class.getName());
        bkConf.setProperty(ZooKeeperCache.ZK_CACHE_INSTANCE, new ZooKeeperCache(zkClient) {
        });
    }
    if (conf.getBookkeeperClientIsolationGroups() != null && !conf.getBookkeeperClientIsolationGroups().isEmpty()) {
        bkConf.setEnsemblePlacementPolicy(ZkIsolatedBookieEnsemblePlacementPolicy.class);
        bkConf.setProperty(ZkIsolatedBookieEnsemblePlacementPolicy.ISOLATION_BOOKIE_GROUPS, conf.getBookkeeperClientIsolationGroups());
        if (bkConf.getProperty(ZooKeeperCache.ZK_CACHE_INSTANCE) == null) {
            bkConf.setProperty(ZooKeeperCache.ZK_CACHE_INSTANCE, new ZooKeeperCache(zkClient) {
            });
        }
    }
    try {
        return new BookKeeper(bkConf, zkClient);
    } catch (InterruptedException | KeeperException e) {
        throw new IOException(e);
    }
}
Also used: BookKeeper(org.apache.bookkeeper.client.BookKeeper) HierarchicalLedgerManagerFactory(org.apache.bookkeeper.meta.HierarchicalLedgerManagerFactory) IOException(java.io.IOException) ZkBookieRackAffinityMapping(com.yahoo.pulsar.zookeeper.ZkBookieRackAffinityMapping) ClientConfiguration(org.apache.bookkeeper.conf.ClientConfiguration) ZooKeeperCache(com.yahoo.pulsar.zookeeper.ZooKeeperCache) KeeperException(org.apache.zookeeper.KeeperException)
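The try/catch at the end of create() wraps ZooKeeper's checked KeeperException and InterruptedException into an IOException, so callers only deal with IO-level failures. Below is a minimal, self-contained sketch of that translation step; the ZkWork interface and callTranslated helper are hypothetical names introduced here for illustration only.

import java.io.IOException;

import org.apache.zookeeper.KeeperException;

public final class CheckedToIo {

    // Hypothetical functional interface standing in for any ZooKeeper-facing call.
    interface ZkWork<T> {
        T run() throws KeeperException, InterruptedException;
    }

    // Translates ZooKeeper's checked exceptions into IOException, as create() does above.
    // Re-asserting the interrupt flag is an extra precaution not present in the original.
    static <T> T callTranslated(ZkWork<T> work) throws IOException {
        try {
            return work.run();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IOException(e);
        } catch (KeeperException e) {
            throw new IOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(callTranslated(() -> "connected"));
    }
}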

Example 67 with KeeperException

Use of org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.KeeperException in project pulsar by yahoo.

The class Clusters, method deleteNamespaceIsolationPolicy.

@DELETE
@Path("/{cluster}/namespaceIsolationPolicies/{policyName}")
@ApiOperation(value = "Delete namespace isolation policy")
@ApiResponses(value = { @ApiResponse(code = 403, message = "Don't have admin permission or policy is read only"), @ApiResponse(code = 412, message = "Cluster doesn't exist") })
public void deleteNamespaceIsolationPolicy(@PathParam("cluster") String cluster, @PathParam("policyName") String policyName) throws Exception {
    validateSuperUserAccess();
    validateClusterExists(cluster);
    validatePoliciesReadOnlyAccess();
    try {
        String nsIsolationPolicyPath = path("clusters", cluster, "namespaceIsolationPolicies");
        NamespaceIsolationPolicies nsIsolationPolicies = namespaceIsolationPoliciesCache().get(nsIsolationPolicyPath).orElseGet(() -> {
            try {
                this.createNamespaceIsolationPolicyNode(nsIsolationPolicyPath);
                return new NamespaceIsolationPolicies();
            } catch (KeeperException | InterruptedException e) {
                throw new RestException(e);
            }
        });
        nsIsolationPolicies.deletePolicy(policyName);
        globalZk().setData(nsIsolationPolicyPath, jsonMapper().writeValueAsBytes(nsIsolationPolicies.getPolicies()), -1);
        // make sure that the cache content will be refreshed for the next read access
        namespaceIsolationPoliciesCache().invalidate(nsIsolationPolicyPath);
    } catch (KeeperException.NoNodeException nne) {
        log.warn("[{}] Failed to update brokers/{}/namespaceIsolationPolicies: Does not exist", clientAppId(), cluster);
        throw new RestException(Status.NOT_FOUND, "NamespaceIsolationPolicies for cluster " + cluster + " does not exist");
    } catch (Exception e) {
        log.error("[{}] Failed to update brokers/{}/namespaceIsolationPolicies/{}", clientAppId(), cluster, policyName, e);
        throw new RestException(e);
    }
}
Also used: NamespaceIsolationPolicies(com.yahoo.pulsar.common.policies.impl.NamespaceIsolationPolicies) RestException(com.yahoo.pulsar.broker.web.RestException) KeeperException(org.apache.zookeeper.KeeperException) JsonGenerationException(com.fasterxml.jackson.core.JsonGenerationException) IOException(java.io.IOException) JsonMappingException(com.fasterxml.jackson.databind.JsonMappingException) Path(javax.ws.rs.Path) DELETE(javax.ws.rs.DELETE) ApiOperation(io.swagger.annotations.ApiOperation) ApiResponses(io.swagger.annotations.ApiResponses)
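Note that the orElseGet fallback above must rethrow the checked KeeperException/InterruptedException as an unchecked RestException, because Supplier.get() cannot declare checked exceptions. A compilable sketch of that wrapping pattern, with a hypothetical UncheckedZkException and a stubbed createPolicyNode standing in for the real Pulsar helpers:

import java.util.Optional;
import java.util.function.Supplier;

import org.apache.zookeeper.KeeperException;

public final class LambdaWrapping {

    // Hypothetical unchecked wrapper playing the role of Pulsar's RestException.
    static final class UncheckedZkException extends RuntimeException {
        UncheckedZkException(Throwable cause) {
            super(cause);
        }
    }

    // Stub for the real ZooKeeper create() call; declares the same checked exceptions.
    static String createPolicyNode(String path) throws KeeperException, InterruptedException {
        return path;
    }

    static String loadOrCreate(Optional<String> cached, String path) {
        // Supplier.get() cannot throw checked exceptions, so they are rethrown unchecked.
        Supplier<String> fallback = () -> {
            try {
                return createPolicyNode(path);
            } catch (KeeperException | InterruptedException e) {
                throw new UncheckedZkException(e);
            }
        };
        return cached.orElseGet(fallback);
    }

    public static void main(String[] args) {
        System.out.println(loadOrCreate(Optional.empty(), "/some/policies/path"));
    }
}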

Example 68 with KeeperException

Use of org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.KeeperException in project hadoop by apache.

The class ZKFailoverController, method getCurrentActive.

/**
   * @return an {@link HAServiceTarget} for the current active node
   * in the cluster, or null if no node is active.
   * @throws IOException if a ZK-related issue occurs
   * @throws InterruptedException if thread is interrupted 
   */
private HAServiceTarget getCurrentActive() throws IOException, InterruptedException {
    synchronized (elector) {
        synchronized (this) {
            byte[] activeData;
            try {
                activeData = elector.getActiveData();
            } catch (ActiveNotFoundException e) {
                return null;
            } catch (KeeperException ke) {
                throw new IOException("Unexpected ZooKeeper issue fetching active node info", ke);
            }
            HAServiceTarget oldActive = dataToTarget(activeData);
            return oldActive;
        }
    }
}
Also used: ActiveNotFoundException(org.apache.hadoop.ha.ActiveStandbyElector.ActiveNotFoundException) IOException(java.io.IOException) KeeperException(org.apache.zookeeper.KeeperException)
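getCurrentActive() treats a missing active node as "no active" and converts any other KeeperException into an IOException. Below is a compilable sketch of that translation applied directly around ZooKeeper.getData(); ActiveInfoReader is a hypothetical class name, and it assumes an already-connected ZooKeeper handle:

import java.io.IOException;

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public final class ActiveInfoReader {

    private final ZooKeeper zk;
    private final String activePath;

    ActiveInfoReader(ZooKeeper zk, String activePath) {
        this.zk = zk;
        this.activePath = activePath;
    }

    // Reads the active node's data; a missing node means there is no active,
    // and any other ZooKeeper failure is surfaced to the caller as an IOException.
    byte[] readActiveData() throws IOException, InterruptedException {
        try {
            return zk.getData(activePath, false, null);
        } catch (KeeperException.NoNodeException e) {
            return null;
        } catch (KeeperException e) {
            throw new IOException("Unexpected ZooKeeper issue reading active node data", e);
        }
    }
}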

Example 69 with KeeperException

Use of org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.KeeperException in project hadoop by apache.

The class ActiveStandbyElector, method fenceOldActive.

/**
   * If there is a breadcrumb node indicating that another node may need
   * fencing, try to fence that node.
   * @return the Stat of the breadcrumb node that was read, or null
   * if no breadcrumb node existed
   */
private Stat fenceOldActive() throws InterruptedException, KeeperException {
    final Stat stat = new Stat();
    byte[] data;
    LOG.info("Checking for any old active which needs to be fenced...");
    try {
        data = zkDoWithRetries(new ZKAction<byte[]>() {

            @Override
            public byte[] run() throws KeeperException, InterruptedException {
                return zkClient.getData(zkBreadCrumbPath, false, stat);
            }
        });
    } catch (KeeperException ke) {
        if (isNodeDoesNotExist(ke.code())) {
            LOG.info("No old node to fence");
            return null;
        }
        // Any other failure reading the breadcrumb node is unexpected; rethrowing the
        // exception and letting the caller deal with it is the best bet.
        throw ke;
    }
    LOG.info("Old node exists: " + StringUtils.byteToHexString(data));
    if (Arrays.equals(data, appData)) {
        LOG.info("But old node has our own data, so don't need to fence it.");
    } else {
        appClient.fenceOldActive(data);
    }
    return stat;
}
Also used: Stat(org.apache.zookeeper.data.Stat) KeeperException(org.apache.zookeeper.KeeperException)
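The isNodeDoesNotExist(ke.code()) check above distinguishes a missing breadcrumb node from other failures; with ZooKeeper's API that amounts to comparing the code against KeeperException.Code.NONODE. A minimal sketch of that check, where KeeperExceptionCodes is a hypothetical holder class:

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.KeeperException.Code;

public final class KeeperExceptionCodes {

    // True when the exception code means the znode simply was not there.
    static boolean isNodeDoesNotExist(Code code) {
        return code == Code.NONODE;
    }

    public static void main(String[] args) {
        KeeperException ke = KeeperException.create(Code.NONODE, "/some/path");
        // Prints true: a NONODE failure is the "nothing to fence" case.
        System.out.println(isNodeDoesNotExist(ke.code()));
    }
}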

Example 70 with KeeperException

Use of org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.KeeperException in project nifi by apache.

The class ZooKeeperMigrator, method readZooKeeper.

void readZooKeeper(OutputStream zkData, AuthMode authMode, byte[] authData) throws IOException, KeeperException, InterruptedException, ExecutionException {
    ZooKeeper zooKeeper = getZooKeeper(zooKeeperEndpointConfig.getConnectString(), authMode, authData);
    JsonWriter jsonWriter = new JsonWriter(new BufferedWriter(new OutputStreamWriter(zkData)));
    jsonWriter.setIndent("  ");
    JsonParser jsonParser = new JsonParser();
    Gson gson = new GsonBuilder().create();
    jsonWriter.beginArray();
    // persist source ZooKeeperEndpointConfig
    gson.toJson(jsonParser.parse(gson.toJson(zooKeeperEndpointConfig)).getAsJsonObject(), jsonWriter);
    LOGGER.info("Retrieving data from source ZooKeeper: {}", zooKeeperEndpointConfig);
    final List<CompletableFuture<Void>> readFutures = streamPaths(getNode(zooKeeper, "/")).parallel().map(node -> CompletableFuture.supplyAsync(() -> {
        final DataStatAclNode dataStatAclNode = retrieveNode(zooKeeper, node);
        LOGGER.debug("retrieved node {} from {}", dataStatAclNode, zooKeeperEndpointConfig);
        return dataStatAclNode;
    }).thenAccept(dataStatAclNode -> {
        // persist each zookeeper node
        synchronized (jsonWriter) {
            gson.toJson(jsonParser.parse(gson.toJson(dataStatAclNode)).getAsJsonObject(), jsonWriter);
        }
    })).collect(Collectors.toList());
    CompletableFuture<Void> allReadsFuture = CompletableFuture.allOf(readFutures.toArray(new CompletableFuture[readFutures.size()]));
    final CompletableFuture<List<Void>> finishedReads = allReadsFuture.thenApply(v -> readFutures.stream().map(CompletableFuture::join).collect(Collectors.toList()));
    final List<Void> readsDone = finishedReads.get();
    jsonWriter.endArray();
    jsonWriter.close();
    if (LOGGER.isInfoEnabled()) {
        final int readCount = readsDone.size();
        LOGGER.info("{} {} read from {}", readCount, readCount == 1 ? "node" : "nodes", zooKeeperEndpointConfig);
    }
    closeZooKeeper(zooKeeper);
}
Also used: CreateMode(org.apache.zookeeper.CreateMode) Spliterators(java.util.Spliterators) BiFunction(java.util.function.BiFunction) LoggerFactory(org.slf4j.LoggerFactory) ACL(org.apache.zookeeper.data.ACL) CompletableFuture(java.util.concurrent.CompletableFuture) Stat(org.apache.zookeeper.data.Stat) JsonParser(com.google.gson.JsonParser) Function(java.util.function.Function) GsonBuilder(com.google.gson.GsonBuilder) JsonReader(com.google.gson.stream.JsonReader) ArrayList(java.util.ArrayList) Strings(com.google.common.base.Strings) Gson(com.google.gson.Gson) OutputStreamWriter(java.io.OutputStreamWriter) StreamSupport(java.util.stream.StreamSupport) Splitter(com.google.common.base.Splitter) JsonWriter(com.google.gson.stream.JsonWriter) ZooKeeper(org.apache.zookeeper.ZooKeeper) OutputStream(java.io.OutputStream) Logger(org.slf4j.Logger) KeeperException(org.apache.zookeeper.KeeperException) Watcher(org.apache.zookeeper.Watcher) BufferedWriter(java.io.BufferedWriter) IOException(java.io.IOException) InputStreamReader(java.io.InputStreamReader) Collectors(java.util.stream.Collectors) ExecutionException(java.util.concurrent.ExecutionException) TimeUnit(java.util.concurrent.TimeUnit) Consumer(java.util.function.Consumer) CountDownLatch(java.util.concurrent.CountDownLatch) List(java.util.List) CompletionStage(java.util.concurrent.CompletionStage) Stream(java.util.stream.Stream) ZooDefs(org.apache.zookeeper.ZooDefs) Preconditions(com.google.common.base.Preconditions) BufferedReader(java.io.BufferedReader) Collections(java.util.Collections) Joiner(com.google.common.base.Joiner) InputStream(java.io.InputStream)
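The read fan-out above relies on CompletableFuture.allOf() followed by a join over the individual futures once they have all completed. A stripped-down, runnable sketch of that pattern using plain strings instead of ZooKeeper nodes (ParallelReads is a hypothetical class name):

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public final class ParallelReads {

    public static void main(String[] args) {
        // Fan out one async task per path, mirroring the per-node read futures
        // collected in readZooKeeper().
        List<CompletableFuture<String>> futures = Stream.of("/a", "/a/b", "/c")
                .map(path -> CompletableFuture.supplyAsync(() -> "data@" + path))
                .collect(Collectors.toList());

        // Wait for every read to finish, then gather the results in stream order;
        // once allOf() completes, each join() returns immediately.
        CompletableFuture<Void> all =
                CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]));
        List<String> results = all
                .thenApply(v -> futures.stream()
                        .map(CompletableFuture::join)
                        .collect(Collectors.toList()))
                .join();

        System.out.println(results);
    }
}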

Aggregations

KeeperException (org.apache.zookeeper.KeeperException) 566
IOException (java.io.IOException) 188
Stat (org.apache.zookeeper.data.Stat) 127
ZooKeeper (org.apache.zookeeper.ZooKeeper) 87
ArrayList (java.util.ArrayList) 51
NoNodeException (org.apache.zookeeper.KeeperException.NoNodeException) 45
Watcher (org.apache.zookeeper.Watcher) 39
WatchedEvent (org.apache.zookeeper.WatchedEvent) 38
Test (org.junit.jupiter.api.Test) 38
CountDownLatch (java.util.concurrent.CountDownLatch) 30
SolrException (org.apache.solr.common.SolrException) 30
HashMap (java.util.HashMap) 29
List (java.util.List) 28
ACL (org.apache.zookeeper.data.ACL) 27
Test (org.junit.Test) 27
HeliosRuntimeException (com.spotify.helios.common.HeliosRuntimeException) 25
ServerName (org.apache.hadoop.hbase.ServerName) 24
Map (java.util.Map) 23
IZooReaderWriter (org.apache.accumulo.fate.zookeeper.IZooReaderWriter) 23
InterruptedIOException (java.io.InterruptedIOException) 20