Example 21 with RetryCounter

Use of org.apache.hadoop.hbase.util.RetryCounter in project hbase by apache: the class RecoverableZooKeeper, method createNonSequential.

private String createNonSequential(String path, byte[] data, List<ACL> acl, CreateMode createMode) throws KeeperException, InterruptedException {
    RetryCounter retryCounter = retryCounterFactory.create();
    // False for first attempt, true for all retries.
    boolean isRetry = false;
    while (true) {
        try {
            return checkZk().create(path, data, acl, createMode);
        } catch (KeeperException e) {
            switch(e.code()) {
                case NODEEXISTS:
                    if (isRetry) {
                        // If the connection was lost, there is still a possibility that
                        // we have successfully created the node at our previous attempt,
                        // so we read the node and compare.
                        byte[] currentData = checkZk().getData(path, false, null);
                        if (currentData != null && Bytes.compareTo(currentData, data) == 0) {
                            // We successfully created a non-sequential node
                            return path;
                        }
                        LOG.error("Node {} already exists with {}, could not write {}", path, Bytes.toStringBinary(currentData), Bytes.toStringBinary(data));
                        throw e;
                    }
                    LOG.trace("Node {} already exists", path);
                    throw e;
                case CONNECTIONLOSS:
                case OPERATIONTIMEOUT:
                case REQUESTTIMEOUT:
                    retryOrThrow(retryCounter, e, "create");
                    break;
                default:
                    throw e;
            }
        }
        retryCounter.sleepUntilNextRetry();
        isRetry = true;
    }
}
Also used: org.apache.hadoop.hbase.util.RetryCounter, org.apache.zookeeper.KeeperException
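The core of the method above is a retry-or-throw loop: retriable ZooKeeper errors (connection loss, timeouts) trigger a counted, backed-off retry, while everything else propagates. A minimal standalone sketch of that pattern, using a plain attempt counter and linear backoff in place of HBase's RetryCounter (the names and the flaky operation here are illustrative, not HBase API):

```java
import java.util.concurrent.TimeUnit;

public class RetrySketch {
    static final int MAX_ATTEMPTS = 3;

    // Simulated flaky operation: fails with a "connection loss" on the
    // first two calls, then succeeds.
    static int calls = 0;
    static String flakyCreate(String path) throws Exception {
        calls++;
        if (calls < 3) {
            throw new Exception("CONNECTIONLOSS");
        }
        return path;
    }

    public static String createWithRetry(String path) throws Exception {
        int attempts = 0;
        while (true) {
            try {
                return flakyCreate(path);
            } catch (Exception e) {
                // retryOrThrow equivalent: give up once the budget is spent
                if (++attempts >= MAX_ATTEMPTS) {
                    throw e;
                }
            }
            // sleepUntilNextRetry equivalent: simple linear backoff
            TimeUnit.MILLISECONDS.sleep(10L * attempts);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(createWithRetry("/hbase/test-node"));
    }
}
```

The real RetryCounter also supports exponential backoff and a configurable retry limit; the shape of the loop is the same.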

Example 22 with RetryCounter

Use of org.apache.hadoop.hbase.util.RetryCounter in project hbase by apache: the class RecoverableZooKeeper, method createSequential.

private String createSequential(String path, byte[] data, List<ACL> acl, CreateMode createMode) throws KeeperException, InterruptedException {
    RetryCounter retryCounter = retryCounterFactory.create();
    boolean first = true;
    String newPath = path + this.identifier;
    while (true) {
        try {
            if (!first) {
                // Check if we succeeded on a previous attempt
                String previousResult = findPreviousSequentialNode(newPath);
                if (previousResult != null) {
                    return previousResult;
                }
            }
            first = false;
            return checkZk().create(newPath, data, acl, createMode);
        } catch (KeeperException e) {
            switch(e.code()) {
                case CONNECTIONLOSS:
                case OPERATIONTIMEOUT:
                case REQUESTTIMEOUT:
                    retryOrThrow(retryCounter, e, "create");
                    break;
                default:
                    throw e;
            }
        }
        retryCounter.sleepUntilNextRetry();
    }
}
Also used: org.apache.hadoop.hbase.util.RetryCounter, org.apache.zookeeper.KeeperException
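The trick in createSequential is embedding a client identifier in the node name: ZooKeeper appends a monotonically increasing suffix to sequential nodes, so after a connection loss the client can scan the siblings for a name carrying its own identifier and recognize a create that actually succeeded. A hypothetical standalone version of that lookup (the real findPreviousSequentialNode lists children via ZooKeeper and also verifies the node's data):

```java
import java.util.Arrays;
import java.util.List;

public class SequentialLookup {
    // Return the sibling whose name starts with path + identifier,
    // i.e. a node this client created on an earlier attempt.
    static String findPrevious(String prefixWithId, List<String> siblings) {
        for (String child : siblings) {
            if (child.startsWith(prefixWithId)) {
                return child;
            }
        }
        return null; // no earlier attempt succeeded; safe to create again
    }

    public static void main(String[] args) {
        List<String> siblings = Arrays.asList(
            "/locks/lock-otherclient-0000000041",
            "/locks/lock-myid-0000000042");
        System.out.println(findPrevious("/locks/lock-myid", siblings));
    }
}
```

Without the identifier the retry could not tell its own node apart from another client's, and a blind re-create would leave a duplicate sequential node behind.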

Example 23 with RetryCounter

Use of org.apache.hadoop.hbase.util.RetryCounter in project hbase by apache: the class RecoverableZooKeeper, method delete.

/**
 * delete is an idempotent operation. Retry before throwing exception.
 * This function will not throw NoNodeException if the path does not
 * exist.
 */
public void delete(String path, int version) throws InterruptedException, KeeperException {
    Span span = TraceUtil.getGlobalTracer().spanBuilder("RecoverableZookeeper.delete").startSpan();
    try (Scope scope = span.makeCurrent()) {
        RetryCounter retryCounter = retryCounterFactory.create();
        // False for first attempt, true for all retries.
        boolean isRetry = false;
        while (true) {
            try {
                checkZk().delete(path, version);
                return;
            } catch (KeeperException e) {
                switch(e.code()) {
                    case NONODE:
                        if (isRetry) {
                            LOG.debug("Node {} already deleted. Assuming a previous attempt succeeded.", path);
                            return;
                        }
                        LOG.debug("Node {} already deleted, retry={}", path, isRetry);
                        throw e;
                    case CONNECTIONLOSS:
                    case OPERATIONTIMEOUT:
                    case REQUESTTIMEOUT:
                        retryOrThrow(retryCounter, e, "delete");
                        break;
                    default:
                        throw e;
                }
            }
            retryCounter.sleepUntilNextRetry();
            isRetry = true;
        }
    } finally {
        span.end();
    }
}
Also used: io.opentelemetry.context.Scope, org.apache.hadoop.hbase.util.RetryCounter, io.opentelemetry.api.trace.Span, org.apache.zookeeper.KeeperException
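The delete method leans on idempotence: if a retry finds the node already gone (NONODE), the safe interpretation is that the earlier, timed-out attempt actually deleted it, so the operation is treated as a success rather than an error. A toy in-memory stand-in for that decision logic (not the ZooKeeper API; on a first attempt a missing node is still reported as a failure, mirroring the throw in the code above):

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentDelete {
    static Set<String> nodes = new HashSet<>();

    static boolean delete(String path, boolean isRetry) {
        boolean removed = nodes.remove(path);
        if (!removed && isRetry) {
            // Node already gone on a retry: assume our previous
            // (timed-out) attempt succeeded.
            return true;
        }
        return removed;
    }

    public static void main(String[] args) {
        nodes.add("/hbase/splitWAL/task1");
        System.out.println(delete("/hbase/splitWAL/task1", false)); // first attempt succeeds
        System.out.println(delete("/hbase/splitWAL/task1", true));  // retry: already gone, still success
    }
}
```

This is why delete can be retried freely on connection loss while non-idempotent operations like sequential create need the extra bookkeeping shown in the previous example.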

Aggregations

org.apache.hadoop.hbase.util.RetryCounter: 23 usages
org.apache.zookeeper.KeeperException: 16 usages
org.apache.htrace.TraceScope: 8 usages
java.io.IOException: 7 usages
java.util.ArrayList: 4 usages
java.util.Arrays: 4 usages
java.util.Collections: 4 usages
java.util.List: 4 usages
java.util.concurrent.TimeUnit: 4 usages
org.apache.hadoop.hbase.util.Bytes: 4 usages
org.apache.hadoop.hbase.util.Threads: 4 usages
org.apache.hbase.thirdparty.com.google.common.io.Closeables: 4 usages
java.net.InetAddress: 3 usages
java.net.UnknownHostException: 3 usages
java.util.Collection: 3 usages
java.util.Iterator: 3 usages
java.util.Map: 3 usages
java.util.Random: 3 usages
java.util.Set: 3 usages
org.apache.hadoop.conf.Configuration: 3 usages