
Example 1 with MasterRegistryFetchException

use of org.apache.hadoop.hbase.exceptions.MasterRegistryFetchException in project hbase by apache.

From class AbstractRpcBasedConnectionRegistry, the groupCall method:

/**
 * Send requests concurrently to hedgedReadFanOut end points. If any request succeeds, we
 * complete the future and quit. If all requests in a round fail, we start another round,
 * sending requests concurrently to the next hedgedReadFanOut end points. If all end points
 * have been tried and all have failed, we fail the future.
 */
private <T extends Message> void groupCall(CompletableFuture<T> future, Set<ServerName> servers,
        List<ClientMetaService.Interface> stubs, int startIndexInclusive, Callable<T> callable,
        Predicate<T> isValidResp, String debug, ConcurrentLinkedQueue<Throwable> errors) {
    int endIndexExclusive = Math.min(startIndexInclusive + hedgedReadFanOut, stubs.size());
    AtomicInteger remaining = new AtomicInteger(endIndexExclusive - startIndexInclusive);
    for (int i = startIndexInclusive; i < endIndexExclusive; i++) {
        addListener(call(stubs.get(i), callable), (r, e) -> {
            // a cheap check that lets later callbacks bail out early once the future is done
            if (future.isDone()) {
                return;
            }
            if (e == null && !isValidResp.test(r)) {
                e = badResponse(debug);
            }
            if (e != null) {
                // make sure when remaining reaches 0 we have all exceptions in the errors queue
                errors.add(e);
                if (remaining.decrementAndGet() == 0) {
                    if (endIndexExclusive == stubs.size()) {
                        // we are done, complete the future with exception
                        RetriesExhaustedException ex = new RetriesExhaustedException("masters", stubs.size(), new ArrayList<>(errors));
                        future.completeExceptionally(new MasterRegistryFetchException(servers, ex));
                    } else {
                        groupCall(future, servers, stubs, endIndexExclusive, callable, isValidResp, debug, errors);
                    }
                }
            } else {
                // no need to decrement the counter; the future is already completed
                future.complete(r);
            }
        });
    }
}
Also used : MasterRegistryFetchException(org.apache.hadoop.hbase.exceptions.MasterRegistryFetchException) AtomicInteger(java.util.concurrent.atomic.AtomicInteger)
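The hedged fan-out pattern in groupCall can be sketched in isolation. The following is a minimal, self-contained sketch, not HBase's API: `HedgedCall` and `Supplier`-based endpoints are made-up stand-ins for the RPC stubs, and a plain RuntimeException stands in for RetriesExhaustedException/MasterRegistryFetchException.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

public class HedgedCall {

    /**
     * Fans out up to fanOut async calls per round; the first successful result
     * completes the future, a fully failed round triggers the next batch, and
     * exhausting all endpoints fails the future with the collected errors.
     */
    static <T> void groupCall(CompletableFuture<T> future,
                              List<Supplier<CompletableFuture<T>>> endpoints,
                              int start, int fanOut,
                              ConcurrentLinkedQueue<Throwable> errors) {
        int end = Math.min(start + fanOut, endpoints.size());
        AtomicInteger remaining = new AtomicInteger(end - start);
        for (int i = start; i < end; i++) {
            endpoints.get(i).get().whenComplete((r, e) -> {
                if (future.isDone()) {
                    return; // another hedge already won
                }
                if (e != null) {
                    errors.add(e);
                    if (remaining.decrementAndGet() == 0) {
                        if (end == endpoints.size()) {
                            // every endpoint has been tried and failed
                            future.completeExceptionally(new RuntimeException(
                                "all " + endpoints.size() + " endpoints failed, "
                                    + errors.size() + " errors collected"));
                        } else {
                            // this round failed entirely; hedge the next batch
                            groupCall(future, endpoints, end, fanOut, errors);
                        }
                    }
                } else {
                    future.complete(r);
                }
            });
        }
    }
}
```

With two failing endpoints ahead of a healthy one and a fan-out of 2, the first round fails entirely and the second round succeeds.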

Example 2 with MasterRegistryFetchException

use of org.apache.hadoop.hbase.exceptions.MasterRegistryFetchException in project hbase by apache.

From class TestClientTimeouts, the testAdminTimeout method:

/**
 * Test that a client that fails an RPC to the master retries properly and doesn't throw any
 * unexpected exceptions.
 */
@Test
public void testAdminTimeout() throws Exception {
    boolean lastFailed = false;
    int initialInvocations = invokations.get();
    RandomTimeoutRpcClient rpcClient = (RandomTimeoutRpcClient) RpcClientFactory.createClient(TEST_UTIL.getConfiguration(), TEST_UTIL.getClusterKey());
    try {
        for (int i = 0; i < 5 || (lastFailed && i < 100); ++i) {
            lastFailed = false;
            // Ensure the HBaseAdmin uses a new connection by changing Configuration.
            Configuration conf = HBaseConfiguration.create(TEST_UTIL.getConfiguration());
            conf.set(HConstants.HBASE_CLIENT_INSTANCE_ID, String.valueOf(-1));
            Admin admin = null;
            Connection connection = null;
            try {
                connection = ConnectionFactory.createConnection(conf);
                admin = connection.getAdmin();
                admin.balancerSwitch(false, false);
            } catch (MasterRegistryFetchException ex) {
                // Since we are randomly throwing SocketTimeoutExceptions, it is possible to get
                // a MasterRegistryFetchException. It's a bug if we get other exceptions.
                lastFailed = true;
            } finally {
                if (admin != null) {
                    admin.close();
                    if (admin.getConnection().isClosed()) {
                        rpcClient = (RandomTimeoutRpcClient) RpcClientFactory.createClient(TEST_UTIL.getConfiguration(), TEST_UTIL.getClusterKey());
                    }
                }
                if (connection != null) {
                    connection.close();
                }
            }
        }
        // Ensure the RandomTimeoutRpcEngine is actually being used.
        assertFalse(lastFailed);
        assertTrue(invokations.get() > initialInvocations);
    } finally {
        rpcClient.close();
    }
}
Also used : MasterRegistryFetchException(org.apache.hadoop.hbase.exceptions.MasterRegistryFetchException) Configuration(org.apache.hadoop.conf.Configuration) HBaseConfiguration(org.apache.hadoop.hbase.HBaseConfiguration) Test(org.junit.Test)
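The test's retry discipline — tolerate one known transient exception type, treat anything else as a bug — can be distilled into a small generic helper. This is a hypothetical sketch, not HBase code; `TransientRetry` and `retryOn` are made-up names.

```java
import java.util.concurrent.Callable;

public class TransientRetry {

    /**
     * Retries the action up to maxAttempts times, but only when the failure is
     * of the single exception type considered transient; any other exception
     * propagates immediately, and the last transient failure is rethrown once
     * the attempts are exhausted.
     */
    static <T> T retryOn(Class<? extends Exception> transientType,
                         int maxAttempts, Callable<T> action) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                if (!transientType.isInstance(e)) {
                    throw e; // an unexpected exception type is a bug; surface it
                }
                last = e; // transient failure; try again
            }
        }
        throw last;
    }
}
```

In the test above the transient type is MasterRegistryFetchException (caused by the randomly injected SocketTimeoutExceptions); here a raw SocketTimeoutException demonstrates the same shape.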

Example 3 with MasterRegistryFetchException

use of org.apache.hadoop.hbase.exceptions.MasterRegistryFetchException in project hbase by apache.

From class CustomSaslAuthenticationProviderTestBase, the testNegativeAuthentication method:

@Test
public void testNegativeAuthentication() throws Exception {
    // Attempt to read with an invalid password; authentication should fail with a SaslException.
    UserGroupInformation user1 = UserGroupInformation.createUserForTesting("user1", new String[0]);
    user1.addToken(createPasswordToken("user1", "definitely not the password", clusterId));
    user1.doAs(new PrivilegedExceptionAction<Void>() {

        @Override
        public Void run() throws Exception {
            Configuration clientConf = getClientConf();
            clientConf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 1);
            // either wrapper should still carry a SaslException as the cause
            try (Connection conn = ConnectionFactory.createConnection(clientConf);
                Table t = conn.getTable(tableName)) {
                t.get(new Get(Bytes.toBytes("r1")));
                fail("Should not successfully authenticate with HBase");
            } catch (MasterRegistryFetchException mfe) {
                Throwable cause = mfe.getCause();
                assertTrue(cause.getMessage(), cause.getMessage().contains("SaslException"));
            } catch (RetriesExhaustedException re) {
                assertTrue(re.getMessage(), re.getMessage().contains("SaslException"));
            } catch (Exception e) {
                // Any other exception is unexpected.
                fail("Unexpected exception caught, was expecting an authentication error: " + Throwables.getStackTraceAsString(e));
            }
            return null;
        }
    });
}
Also used : MasterRegistryFetchException(org.apache.hadoop.hbase.exceptions.MasterRegistryFetchException) Table(org.apache.hadoop.hbase.client.Table) Configuration(org.apache.hadoop.conf.Configuration) RetriesExhaustedException(org.apache.hadoop.hbase.client.RetriesExhaustedException) Get(org.apache.hadoop.hbase.client.Get) Connection(org.apache.hadoop.hbase.client.Connection) UnsupportedCallbackException(javax.security.auth.callback.UnsupportedCallbackException) AccessDeniedException(org.apache.hadoop.hbase.security.AccessDeniedException) IOException(java.io.IOException) UserGroupInformation(org.apache.hadoop.security.UserGroupInformation) Test(org.junit.Test)
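Both catch blocks in Example 3 accept the SaslException whether it arrives wrapped in a MasterRegistryFetchException or a RetriesExhaustedException. A generic cause-chain check makes that pattern reusable; `CauseChain` below is a hypothetical helper, not part of HBase.

```java
public class CauseChain {

    /**
     * Walks the cause chain of a throwable, returning true if any message in
     * the chain contains the given fragment. This mirrors how the test accepts
     * a SaslException regardless of which wrapper exception carries it.
     */
    static boolean anyCauseMessageContains(Throwable t, String fragment) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            String msg = cur.getMessage();
            if (msg != null && msg.contains(fragment)) {
                return true;
            }
        }
        return false;
    }
}
```

With such a helper, the two catch blocks could collapse into one assertion on the wrapped cause.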

Aggregations

MasterRegistryFetchException (org.apache.hadoop.hbase.exceptions.MasterRegistryFetchException) 3
Configuration (org.apache.hadoop.conf.Configuration) 2
Test (org.junit.Test) 2
IOException (java.io.IOException) 1
AtomicInteger (java.util.concurrent.atomic.AtomicInteger) 1
UnsupportedCallbackException (javax.security.auth.callback.UnsupportedCallbackException) 1
HBaseConfiguration (org.apache.hadoop.hbase.HBaseConfiguration) 1
Connection (org.apache.hadoop.hbase.client.Connection) 1
Get (org.apache.hadoop.hbase.client.Get) 1
RetriesExhaustedException (org.apache.hadoop.hbase.client.RetriesExhaustedException) 1
Table (org.apache.hadoop.hbase.client.Table) 1
AccessDeniedException (org.apache.hadoop.hbase.security.AccessDeniedException) 1
UserGroupInformation (org.apache.hadoop.security.UserGroupInformation) 1