Example 66 with BiConsumer

use of java.util.function.BiConsumer in project redisson by redisson.

the class CommandAsyncService method allAsync.

private <T, R> RFuture<R> allAsync(boolean readOnlyMode, Codec codec, RedisCommand<T> command, SlotCallback<T, R> callback, Object... params) {
    RPromise<R> mainPromise = new RedissonPromise<R>();
    Collection<MasterSlaveEntry> nodes = connectionManager.getEntrySet();
    AtomicInteger counter = new AtomicInteger(nodes.size());
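    // Shared completion listener: fails fast on non-redirect errors, feeds each per-node
    // result to the SlotCallback, and resolves the main promise once the last node answers.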
    BiConsumer<T, Throwable> listener = new BiConsumer<T, Throwable>() {

        @Override
        public void accept(T result, Throwable u) {
            if (u != null && !(u instanceof RedisRedirectException)) {
                mainPromise.tryFailure(u);
                return;
            }
            if (u instanceof RedisRedirectException) {
                result = command.getConvertor().convert(result);
            }
            if (callback != null) {
                callback.onSlotResult(result);
            }
            if (counter.decrementAndGet() == 0) {
                if (callback != null) {
                    mainPromise.trySuccess(callback.onFinish());
                } else {
                    mainPromise.trySuccess(null);
                }
            }
        }
    };
    for (MasterSlaveEntry entry : nodes) {
        RFuture<T> promise = async(readOnlyMode, new NodeSource(entry), codec, command, params, true, false);
        promise.whenComplete(listener);
    }
    return mainPromise;
}
Also used : RedissonPromise(org.redisson.misc.RedissonPromise) RedisRedirectException(org.redisson.client.RedisRedirectException) NodeSource(org.redisson.connection.NodeSource) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) MasterSlaveEntry(org.redisson.connection.MasterSlaveEntry) BiConsumer(java.util.function.BiConsumer)
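
The shape above (one shared BiConsumer registered on every per-node future, with an AtomicInteger counting down completions) is a general fan-in idiom. A minimal, framework-free sketch of the same idea over plain CompletableFutures, using illustrative names rather than Redisson API:

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BiConsumer;

final class FanIn {
    // Completes the returned future once every source has completed, failing fast on the first error.
    static <T> CompletableFuture<Void> all(List<CompletableFuture<T>> sources) {
        CompletableFuture<Void> done = new CompletableFuture<>();
        if (sources.isEmpty()) {
            done.complete(null); // nothing to wait for
            return done;
        }
        AtomicInteger remaining = new AtomicInteger(sources.size());
        BiConsumer<T, Throwable> listener = (result, error) -> {
            if (error != null) {
                done.completeExceptionally(error); // first failure wins
            } else if (remaining.decrementAndGet() == 0) {
                done.complete(null); // last successful completion resolves the aggregate
            }
        };
        sources.forEach(f -> f.whenComplete(listener));
        return done;
    }
}

Unlike CompletableFuture.allOf, the explicit counter makes the completion point visible, which is what lets the Redisson version feed each per-slot result into its SlotCallback before resolving the main promise.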

Example 67 with BiConsumer

use of java.util.function.BiConsumer in project cassandra by apache.

the class CASTest method consistencyAfterWriteTimeoutTest.

/**
 * Base test to ensure that if a write times out but has its proposal accepted by some nodes (fewer than a quorum),
 * and a following SERIAL operation does not observe that write (the nodes having accepted it do not participate in
 * that following operation), then that write is never applied, even when the nodes that accepted the original
 * proposal participate again.
 *
 * <p>In other words, if an operation times out, it may or may not be applied, but that "fate" is persistently
 * decided by the very first SERIAL operation that "succeeds" (in the sense of not timing out or throwing some
 * other exception).
 *
 * @param postTimeoutOperation1 a SERIAL operation executed after an initial write (inserting the row [0, 0]) times
 *                              out. It is executed with a QUORUM of nodes that have _not_ seen the timed-out
 *                              proposal, so that operation should expect that the [0, 0] write has not taken
 *                              place.
 * @param postTimeoutOperation2 a 2nd SERIAL operation executed _after_ {@code postTimeoutOperation1}, with no
 *                              write executed between the 2 operations. Contrary to the 1st operation, the QUORUM
 *                              for this operation _will_ include the node that got the proposal for the [0, 0]
 *                              insert but didn't participate in {@code postTimeoutOperation1}. That operation
 *                              should also not witness the [0, 0] write (since {@code postTimeoutOperation1}
 *                              didn't).
 * @param loseCommitOfOperation1 if {@code true}, the test will also drop the "commit" messages for
 *                               {@code postTimeoutOperation1}. In general, the test should behave the same with or
 *                               without that flag, since a value is decided as soon as it has been "accepted by a
 *                               quorum" and the commits should always be properly replayed.
 */
private void consistencyAfterWriteTimeoutTest(BiConsumer<String, ICoordinator> postTimeoutOperation1, BiConsumer<String, ICoordinator> postTimeoutOperation2, boolean loseCommitOfOperation1) throws IOException {
    // As this test is not about performance, the raised 4s write timeout below is probably ok,
    // even if we ideally should dig into the underlying reason.
    try (Cluster cluster = init(Cluster.create(3, config -> config.set("write_request_timeout", "4000ms").set("cas_contention_timeout", CONTENTION_TIMEOUT)))) {
        String table = KEYSPACE + ".t";
        cluster.schemaChange("CREATE TABLE " + table + " (k int PRIMARY KEY, v int)");
        // We do a CAS insertion, but with the PROPOSE message dropped on nodes 1 and 2. The CAS will not get
        // through and should time out. Importantly, node 3 does receive and answer the PROPOSE.
        IMessageFilters.Filter dropProposeFilter = cluster.filters().inbound().verbs(PAXOS_PROPOSE_REQ.id).from(3).to(1, 2).drop();
        try {
            // NOTE: the consistency below is the "commit" one, so it doesn't matter at all here.
            // NOTE 2: we use node 3 as coordinator because message filters don't currently work for locally
            // delivered messages and as we want to drop messages to 1 and 2, we can't use them.
            cluster.coordinator(3).execute("INSERT INTO " + table + "(k, v) VALUES (0, 0) IF NOT EXISTS", ConsistencyLevel.ONE);
            fail("The insertion should have timed-out");
        } catch (Exception e) {
            // We expect a CAS write timeout; matching the exception by its simple class name is fragile and could
            // be improved at the dtest API level.
            if (!e.getClass().getSimpleName().equals("CasWriteTimeoutException"))
                throw e;
        } finally {
            dropProposeFilter.off();
        }
        // Isolates node 3 and executes the SERIAL operation. As neither node 1 nor node 2 got the initial insert proposal,
        // there is nothing to "replay" and the operation should assert the table is still empty.
        IMessageFilters.Filter ignoreNode3Filter = cluster.filters().verbs(paxosAndReadVerbs()).to(3).drop();
        IMessageFilters.Filter dropCommitFilter = null;
        if (loseCommitOfOperation1) {
            dropCommitFilter = cluster.filters().verbs(PAXOS_COMMIT_REQ.id).to(1, 2).drop();
        }
        try {
            postTimeoutOperation1.accept(table, cluster.coordinator(1));
        } finally {
            ignoreNode3Filter.off();
            if (dropCommitFilter != null)
                dropCommitFilter.off();
        }
        // Node 3 is now back, and we isolate node 2 to ensure the next read hits nodes 1 and 3.
        // What we want to ensure is that despite node 3 having the initial insert in its paxos state, in a position
        // to be replayed, that insert is _not_ replayed (replaying it would contradict serializability, since the
        // previous operation asserted nothing was inserted). It is this execution that failed before CASSANDRA-12126.
        IMessageFilters.Filter ignoreNode2Filter = cluster.filters().verbs(paxosAndReadVerbs()).to(2).drop();
        try {
            postTimeoutOperation2.accept(table, cluster.coordinator(1));
        } finally {
            ignoreNode2Filter.off();
        }
    }
}
Also used : IMessageFilters(org.apache.cassandra.distributed.api.IMessageFilters) TableId(org.apache.cassandra.schema.TableId) IInstance(org.apache.cassandra.distributed.api.IInstance) PAXOS_PREPARE_REQ(org.apache.cassandra.net.Verb.PAXOS_PREPARE_REQ) PAXOS_PROPOSE_REQ(org.apache.cassandra.net.Verb.PAXOS_PROPOSE_REQ) Int32Type(org.apache.cassandra.db.marshal.Int32Type) Token(org.apache.cassandra.dht.Token) UnsafeGossipHelper(org.apache.cassandra.distributed.impl.UnsafeGossipHelper) READ_REQ(org.apache.cassandra.net.Verb.READ_REQ) ICoordinator(org.apache.cassandra.distributed.api.ICoordinator) BiConsumer(java.util.function.BiConsumer) Murmur3Partitioner(org.apache.cassandra.dht.Murmur3Partitioner) AssertUtils.fail(org.apache.cassandra.distributed.shared.AssertUtils.fail) Keyspace(org.apache.cassandra.db.Keyspace) AssertUtils.row(org.apache.cassandra.distributed.shared.AssertUtils.row) StorageService(org.apache.cassandra.service.StorageService) Assert.assertTrue(org.junit.Assert.assertTrue) IOException(java.io.IOException) Test(org.junit.Test) UUID(java.util.UUID) ConsistencyLevel(org.apache.cassandra.distributed.api.ConsistencyLevel) UUIDGen(org.apache.cassandra.utils.UUIDGen) AssertUtils.assertRows(org.apache.cassandra.distributed.shared.AssertUtils.assertRows) Ignore(org.junit.Ignore) Assert.assertFalse(org.junit.Assert.assertFalse) Cluster(org.apache.cassandra.distributed.Cluster) Assert(org.junit.Assert) PAXOS_COMMIT_REQ(org.apache.cassandra.net.Verb.PAXOS_COMMIT_REQ)
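
Because the helper takes its two SERIAL operations as BiConsumer<String, ICoordinator> parameters, each concrete test only supplies the statements to run. A hedged sketch of a caller; the operations shown are illustrative, not the exact cases from CASTest:

@Test
public void testConsistencyAfterWriteTimeout() throws IOException {
    // Operation 1: a SERIAL read expecting an empty table (assertRows with no expected rows).
    BiConsumer<String, ICoordinator> serialRead = (table, coordinator) ->
        assertRows(coordinator.execute("SELECT * FROM " + table + " WHERE k = 0",
                                       ConsistencyLevel.SERIAL));
    // Operation 2: a CAS insert that should succeed, since operation 1 decided the [0, 0] write never happened.
    BiConsumer<String, ICoordinator> casInsert = (table, coordinator) ->
        coordinator.execute("INSERT INTO " + table + " (k, v) VALUES (0, 1) IF NOT EXISTS",
                            ConsistencyLevel.QUORUM);
    consistencyAfterWriteTimeoutTest(serialRead, casInsert, false);
}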

Example 68 with BiConsumer

use of java.util.function.BiConsumer in project cassandra by apache.

the class CellSpecTest method data.

@Parameterized.Parameters(name = "{0}")
public static Collection<Object[]> data() {
    TableMetadata table = TableMetadata.builder("testing", "testing").addPartitionKeyColumn("pk", BytesType.instance).build();
    byte[] rawBytes = { 0, 1, 2, 3, 4, 5, 6 };
    ByteBuffer bbBytes = ByteBuffer.wrap(rawBytes);
    NativePool pool = new NativePool(1024, 1024, 1, () -> ImmediateFuture.success(true));
    NativeAllocator allocator = pool.newAllocator(null);
    OpOrder order = new OpOrder();
    List<Cell<?>> tests = new ArrayList<>();
    BiConsumer<ColumnMetadata, CellPath> fn = (column, path) -> {
        tests.add(new ArrayCell(column, 1234, 1, 1, rawBytes, path));
        tests.add(new BufferCell(column, 1234, 1, 1, bbBytes, path));
        tests.add(new NativeCell(allocator, order.getCurrent(), column, 1234, 1, 1, bbBytes, path));
    };
    // simple
    fn.accept(ColumnMetadata.regularColumn(table, bytes("simple"), BytesType.instance), null);
    // complex
    // it seems NativeCell does not allow CellPath.TOP or CellPath.BOTTOM
    fn.accept(ColumnMetadata.regularColumn(table, bytes("complex"), ListType.getInstance(BytesType.instance, true)), CellPath.create(bytes(UUIDGen.getTimeUUID())));
    return tests.stream().map(a -> new Object[] { a.getClass().getSimpleName() + ":" + (a.path() == null ? "simple" : "complex"), a }).collect(Collectors.toList());
}
Also used : TableMetadata(org.apache.cassandra.schema.TableMetadata) CellPath(org.apache.cassandra.db.rows.CellPath) ColumnMetadata(org.apache.cassandra.schema.ColumnMetadata) Assertions.assertThat(org.assertj.core.api.Assertions.assertThat) RunWith(org.junit.runner.RunWith) ByteBufferUtil.bytes(org.apache.cassandra.utils.ByteBufferUtil.bytes) ByteBuffer(java.nio.ByteBuffer) ArrayList(java.util.ArrayList) OpOrder(org.apache.cassandra.utils.concurrent.OpOrder) ListType(org.apache.cassandra.db.marshal.ListType) BufferCell(org.apache.cassandra.db.rows.BufferCell) BiConsumer(java.util.function.BiConsumer) Parameterized(org.junit.runners.Parameterized) ArrayCell(org.apache.cassandra.db.rows.ArrayCell) NativeCell(org.apache.cassandra.db.rows.NativeCell) Collection(java.util.Collection) BytesType(org.apache.cassandra.db.marshal.BytesType) Test(org.junit.Test) NativeAllocator(org.apache.cassandra.utils.memory.NativeAllocator) Collectors(java.util.stream.Collectors) UUIDGen(org.apache.cassandra.utils.UUIDGen) List(java.util.List) Cell(org.apache.cassandra.db.rows.Cell) ImmediateFuture(org.apache.cassandra.utils.concurrent.ImmediateFuture) ObjectSizes(org.apache.cassandra.utils.ObjectSizes) NativePool(org.apache.cassandra.utils.memory.NativePool)
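
The local BiConsumer above acts as an inline helper: each accept call registers all three Cell implementations for the same column and path, so the variants cannot drift apart. The same trick works anywhere a one-off private method would be overkill; a generic sketch with hypothetical names:

import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;

class VariantCases {
    public static void main(String[] args) {
        List<String> cases = new ArrayList<>();
        // A throwaway two-argument "helper method" captured as a lambda: every call
        // adds one entry per variant, keeping the variants in lockstep.
        BiConsumer<String, String> addVariants = (column, kind) -> {
            cases.add(column + "/" + kind + "/array");
            cases.add(column + "/" + kind + "/buffer");
            cases.add(column + "/" + kind + "/native");
        };
        addVariants.accept("simple", "bytes");
        addVariants.accept("complex", "list");
        System.out.println(cases); // six entries, three per accept call
    }
}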

Example 69 with BiConsumer

use of java.util.function.BiConsumer in project neo4j by neo4j.

the class RecordLoadingTest method shouldReturnsFalseOnMissingToken.

@Test
void shouldReturnsFalseOnMissingToken() {
    // given
    NodeRecord entity = new NodeRecord(0);
    TokenHolder tokenHolder = new DelegatingTokenHolder(new ReadOnlyTokenCreator(), "Test");
    TokenStore<PropertyKeyTokenRecord> store = mock(TokenStore.class);
    BiConsumer noopReporter = mock(BiConsumer.class);
    // when
    boolean valid = RecordLoading.checkValidToken(entity, 0, tokenHolder, store, noopReporter, noopReporter, CursorContext.NULL);
    // then
    assertFalse(valid);
}
Also used : NodeRecord(org.neo4j.kernel.impl.store.record.NodeRecord) TokenHolder(org.neo4j.token.api.TokenHolder) DelegatingTokenHolder(org.neo4j.token.DelegatingTokenHolder) ReadOnlyTokenCreator(org.neo4j.token.ReadOnlyTokenCreator) PropertyKeyTokenRecord(org.neo4j.kernel.impl.store.record.PropertyKeyTokenRecord) BiConsumer(java.util.function.BiConsumer) Test(org.junit.jupiter.api.Test)
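
Note that mock(BiConsumer.class) can only produce a raw-typed mock, since a class literal carries no generic arguments, which is why noopReporter is declared raw above. A small sketch of the same Mockito pattern with an explicit cast and a verification step; this is generic usage, not tied to RecordLoading:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.util.function.BiConsumer;
import org.junit.jupiter.api.Test;

class BiConsumerMockSketch {
    @Test
    void verifiesReportedProblem() {
        // The unchecked cast restores the generic type; the neo4j test above simply keeps the raw type.
        @SuppressWarnings("unchecked")
        BiConsumer<String, Integer> reporter = (BiConsumer<String, Integer>) mock(BiConsumer.class);
        reporter.accept("missing-token", 0);         // stand-in for the code under test reporting a problem
        verify(reporter).accept("missing-token", 0); // assert the report happened exactly once
    }
}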

Example 70 with BiConsumer

use of java.util.function.BiConsumer in project flink by apache.

the class ApplicationDispatcherBootstrapTest method testSubmitFailedJobOnApplicationError.

private void testSubmitFailedJobOnApplicationError(Configuration configuration, BiConsumer<JobID, Throwable> failedJobAssertion) throws Exception {
    final CompletableFuture<Void> submitted = new CompletableFuture<>();
    final TestingDispatcherGateway dispatcherGateway = TestingDispatcherGateway.newBuilder().setSubmitFailedFunction((jobId, jobName, t) -> {
        try {
            failedJobAssertion.accept(jobId, t);
            submitted.complete(null);
            return CompletableFuture.completedFuture(Acknowledge.get());
        } catch (Throwable assertion) {
            submitted.completeExceptionally(assertion);
            return FutureUtils.completedExceptionally(assertion);
        }
    }).setRequestJobStatusFunction(jobId -> submitted.thenApply(ignored -> JobStatus.FAILED))
      .setRequestJobResultFunction(jobId -> submitted.thenApply(ignored -> createJobResult(jobId, ApplicationStatus.FAILED)))
      .build();
    final ApplicationDispatcherBootstrap bootstrap = new ApplicationDispatcherBootstrap(FailingJob.getProgram(), Collections.emptyList(), configuration, dispatcherGateway, scheduledExecutor, exception -> {
    });
    bootstrap.getBootstrapCompletionFuture().get();
}
Also used : CoreMatchers.is(org.hamcrest.CoreMatchers.is) ProgramInvocationException(org.apache.flink.client.program.ProgramInvocationException) ScheduledFuture(java.util.concurrent.ScheduledFuture) ExceptionUtils(org.apache.flink.util.ExceptionUtils) ExtendWith(org.junit.jupiter.api.extension.ExtendWith) Assertions.assertFalse(org.junit.jupiter.api.Assertions.assertFalse) Duration(java.time.Duration) Assertions(org.assertj.core.api.Assertions) FailingJob(org.apache.flink.client.testjar.FailingJob) ScheduledExecutor(org.apache.flink.util.concurrent.ScheduledExecutor) Acknowledge(org.apache.flink.runtime.messages.Acknowledge) Executors(java.util.concurrent.Executors) ExecutorUtils(org.apache.flink.util.ExecutorUtils) Test(org.junit.jupiter.api.Test) FlinkJobNotFoundException(org.apache.flink.runtime.messages.FlinkJobNotFoundException) Assertions.assertTrue(org.junit.jupiter.api.Assertions.assertTrue) SerializedThrowable(org.apache.flink.util.SerializedThrowable) Optional(java.util.Optional) PackagedProgram(org.apache.flink.client.program.PackagedProgram) Assertions.assertThrows(org.junit.jupiter.api.Assertions.assertThrows) Assertions.fail(org.junit.jupiter.api.Assertions.fail) FlinkException(org.apache.flink.util.FlinkException) ScheduledExecutorServiceAdapter(org.apache.flink.util.concurrent.ScheduledExecutorServiceAdapter) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) EnumSource(org.junit.jupiter.params.provider.EnumSource) CompletableFuture(java.util.concurrent.CompletableFuture) JobStatus(org.apache.flink.api.common.JobStatus) DispatcherGateway(org.apache.flink.runtime.dispatcher.DispatcherGateway) Supplier(java.util.function.Supplier) EmbeddedExecutor(org.apache.flink.client.deployment.application.executors.EmbeddedExecutor) PipelineOptionsInternal(org.apache.flink.configuration.PipelineOptionsInternal) MultiExecuteJob(org.apache.flink.client.testjar.MultiExecuteJob) JobResult(org.apache.flink.runtime.jobmaster.JobResult) TestLoggerExtension(org.apache.flink.util.TestLoggerExtension) FutureUtils(org.apache.flink.util.concurrent.FutureUtils) FatalErrorHandler(org.apache.flink.runtime.rpc.FatalErrorHandler) ScheduledExecutorService(java.util.concurrent.ScheduledExecutorService) BiConsumer(java.util.function.BiConsumer) DeploymentOptions(org.apache.flink.configuration.DeploymentOptions) MatcherAssert.assertThat(org.hamcrest.MatcherAssert.assertThat) Assertions.assertEquals(org.junit.jupiter.api.Assertions.assertEquals) JobExecutionException(org.apache.flink.runtime.client.JobExecutionException) HighAvailabilityMode(org.apache.flink.runtime.jobmanager.HighAvailabilityMode) FlinkRuntimeException(org.apache.flink.util.FlinkRuntimeException) ApplicationStatus(org.apache.flink.runtime.clusterframework.ApplicationStatus) Configuration(org.apache.flink.configuration.Configuration) JobCancellationException(org.apache.flink.runtime.client.JobCancellationException) ConcurrentLinkedDeque(java.util.concurrent.ConcurrentLinkedDeque) ExecutionException(java.util.concurrent.ExecutionException) TimeUnit(java.util.concurrent.TimeUnit) AfterEach(org.junit.jupiter.api.AfterEach) ParameterizedTest(org.junit.jupiter.params.ParameterizedTest) JobID(org.apache.flink.api.common.JobID) TestingDispatcherGateway(org.apache.flink.runtime.webmonitor.TestingDispatcherGateway) Collections(java.util.Collections) HighAvailabilityOptions(org.apache.flink.configuration.HighAvailabilityOptions) DuplicateJobSubmissionException(org.apache.flink.runtime.client.DuplicateJobSubmissionException) 
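
The BiConsumer<JobID, Throwable> parameter lets each test inject its own assertion over the failed submission while the dispatcher plumbing above is written once. A hedged sketch of a caller; the configuration contents and the expected exception type are assumptions, not the exact cases from ApplicationDispatcherBootstrapTest:

@Test
void submitFailedJobSurfacesApplicationError() throws Exception {
    // Hypothetical caller: the real tests configure error-handling options and
    // assert specific exception types; both are assumptions here.
    Configuration configuration = new Configuration();
    testSubmitFailedJobOnApplicationError(configuration, (jobId, t) -> {
        assertNotNull(jobId);                                // the failed submission still carries a job id
        assertTrue(t instanceof ProgramInvocationException); // assumed failure type for FailingJob
    });
}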

Aggregations

BiConsumer (java.util.function.BiConsumer)255 Test (org.junit.Test)110 List (java.util.List)106 Map (java.util.Map)77 IOException (java.io.IOException)75 Consumer (java.util.function.Consumer)69 ArrayList (java.util.ArrayList)68 HashMap (java.util.HashMap)64 Collectors (java.util.stream.Collectors)53 CountDownLatch (java.util.concurrent.CountDownLatch)52 AtomicInteger (java.util.concurrent.atomic.AtomicInteger)50 Collections (java.util.Collections)46 Set (java.util.Set)46 Collection (java.util.Collection)45 Arrays (java.util.Arrays)44 TimeUnit (java.util.concurrent.TimeUnit)43 Assert (org.junit.Assert)43 Function (java.util.function.Function)41 Optional (java.util.Optional)40 AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean)35