Example 6 with Property

Use of net.jqwik.api.Property in project kafka by apache.

The class RecordsIteratorTest, method testFileRecords.

@Property
public void testFileRecords(@ForAll CompressionType compressionType, @ForAll long seed) throws IOException {
    // Generate seeded batches, append them to a temp-file-backed log,
    // and verify the iterator yields exactly what was written.
    List<TestBatch<String>> batches = createBatches(seed);
    MemoryRecords memRecords = buildRecords(compressionType, batches);
    FileRecords fileRecords = FileRecords.open(TestUtils.tempFile());
    fileRecords.append(memRecords);
    testIterator(batches, fileRecords);
}
Also used: FileRecords (org.apache.kafka.common.record.FileRecords), MemoryRecords (org.apache.kafka.common.record.MemoryRecords), Property (net.jqwik.api.Property)
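This test, like the simulation tests below, takes a @ForAll long seed and derives all of its randomness from it, so a failing trial can be replayed from the reported seed. A minimal stdlib-only sketch of that property (createBatches here is a hypothetical stand-in for the test's helper, not the Kafka implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SeededBatches {

    // Hypothetical stand-in for the test's createBatches(seed): every
    // random choice comes from a Random constructed with the seed, so
    // the same seed always produces the same batches.
    static List<Integer> createBatches(long seed) {
        Random random = new Random(seed);
        List<Integer> batches = new ArrayList<>();
        int count = 1 + random.nextInt(5);
        for (int i = 0; i < count; i++) {
            batches.add(random.nextInt(1000));
        }
        return batches;
    }

    public static void main(String[] args) {
        // Same seed -> identical generated data on every run.
        if (!createBatches(42L).equals(createBatches(42L))) {
            throw new AssertionError("seeded generation should be deterministic");
        }
        System.out.println("deterministic");
    }
}
```

This is what lets jqwik's afterFailure = AfterFailureMode.SAMPLE_ONLY (used below) re-run only the previously failing sample.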

Example 7 with Property

Use of net.jqwik.api.Property in project kafka by apache.

The class RaftEventSimulationTest, method canRecoverAfterAllNodesKilled.

@Property(tries = 100, afterFailure = AfterFailureMode.SAMPLE_ONLY)
void canRecoverAfterAllNodesKilled(@ForAll int seed, @ForAll @IntRange(min = 1, max = 5) int numVoters, @ForAll @IntRange(min = 0, max = 5) int numObservers) {
    Random random = new Random(seed);
    Cluster cluster = new Cluster(numVoters, numObservers, random);
    MessageRouter router = new MessageRouter(cluster);
    EventScheduler scheduler = schedulerWithDefaultInvariants(cluster);
    // Seed the cluster with some data
    cluster.startAll();
    schedulePolling(scheduler, cluster, 3, 5);
    scheduler.schedule(router::deliverAll, 0, 2, 1);
    scheduler.schedule(new SequentialAppendAction(cluster), 0, 2, 3);
    scheduler.runUntil(cluster::hasConsistentLeader);
    scheduler.runUntil(() -> cluster.anyReachedHighWatermark(10));
    long highWatermark = cluster.maxHighWatermarkReached();
    // We kill all of the nodes. Then we bring back a majority and verify that
    // they are able to elect a leader and continue making progress
    cluster.killAll();
    Iterator<Integer> nodeIdsIterator = cluster.nodes().iterator();
    for (int i = 0; i < cluster.majoritySize(); i++) {
        Integer nodeId = nodeIdsIterator.next();
        cluster.start(nodeId);
    }
    scheduler.runUntil(() -> cluster.allReachedHighWatermark(highWatermark + 10));
}
Also used: AtomicInteger (java.util.concurrent.atomic.AtomicInteger), Random (java.util.Random), Property (net.jqwik.api.Property)
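The restart loop above brings back exactly cluster.majoritySize() nodes before expecting the high watermark to advance again. Assuming the usual Raft quorum definition, floor(n/2) + 1 of the voters (an assumption about the helper, not shown in the snippet), a sketch of that arithmetic:

```java
public class Majority {

    // Assumed definition: a Raft quorum over n voters is floor(n/2) + 1,
    // the smallest group any two quorums must intersect in.
    static int majoritySize(int numVoters) {
        return numVoters / 2 + 1;
    }

    public static void main(String[] args) {
        // With the test's 1..5 voters, restarting this many nodes is
        // always enough to elect a leader and commit new entries.
        for (int n = 1; n <= 5; n++) {
            int m = majoritySize(n);
            if (2 * m <= n) {
                throw new AssertionError("not a majority for n=" + n);
            }
        }
        System.out.println(majoritySize(5));
    }
}
```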

Example 8 with Property

Use of net.jqwik.api.Property in project kafka by apache.

The class RaftEventSimulationTest, method canElectNewLeaderAfterOldLeaderPartitionedAway.

@Property(tries = 100, afterFailure = AfterFailureMode.SAMPLE_ONLY)
void canElectNewLeaderAfterOldLeaderPartitionedAway(@ForAll int seed, @ForAll @IntRange(min = 3, max = 5) int numVoters, @ForAll @IntRange(min = 0, max = 5) int numObservers) {
    Random random = new Random(seed);
    Cluster cluster = new Cluster(numVoters, numObservers, random);
    MessageRouter router = new MessageRouter(cluster);
    EventScheduler scheduler = schedulerWithDefaultInvariants(cluster);
    // Seed the cluster with some data
    cluster.startAll();
    schedulePolling(scheduler, cluster, 3, 5);
    scheduler.schedule(router::deliverAll, 0, 2, 2);
    scheduler.schedule(new SequentialAppendAction(cluster), 0, 2, 3);
    scheduler.runUntil(cluster::hasConsistentLeader);
    scheduler.runUntil(() -> cluster.anyReachedHighWatermark(10));
    // The leader gets partitioned off. We can verify the new leader has been elected
    // by writing some data and ensuring that it gets replicated
    int leaderId = cluster.latestLeader().orElseThrow(() -> new AssertionError("Failed to find current leader"));
    router.filter(leaderId, new DropAllTraffic());
    Set<Integer> nonPartitionedNodes = new HashSet<>(cluster.nodes());
    nonPartitionedNodes.remove(leaderId);
    scheduler.runUntil(() -> cluster.allReachedHighWatermark(20, nonPartitionedNodes));
}
Also used: AtomicInteger (java.util.concurrent.atomic.AtomicInteger), Random (java.util.Random), HashSet (java.util.HashSet), Property (net.jqwik.api.Property)
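The call router.filter(leaderId, new DropAllTraffic()) isolates the old leader by discarding every message to or from it. A minimal stdlib-only sketch of that drop-all idea, with hypothetical names (PartitionFilter, deliver) that are not taken from the Kafka test harness:

```java
import java.util.HashSet;
import java.util.Set;

public class PartitionFilter {

    private final Set<Integer> partitioned = new HashSet<>();

    // Mirrors the effect of installing a DropAllTraffic filter for a node.
    void partition(int nodeId) {
        partitioned.add(nodeId);
    }

    // A message is deliverable only if neither endpoint is partitioned;
    // the simulated router would drop it otherwise.
    boolean deliver(int from, int to) {
        return !partitioned.contains(from) && !partitioned.contains(to);
    }

    public static void main(String[] args) {
        PartitionFilter filter = new PartitionFilter();
        filter.partition(1);
        if (filter.deliver(1, 2) || filter.deliver(3, 1)) {
            throw new AssertionError("traffic to/from node 1 should be dropped");
        }
        if (!filter.deliver(2, 3)) {
            throw new AssertionError("other nodes should still communicate");
        }
        System.out.println("ok");
    }
}
```

The test then asserts progress only on the non-partitioned nodes, since the old leader can no longer receive the replicated data.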

Example 9 with Property

Use of net.jqwik.api.Property in project kafka by apache.

The class RaftEventSimulationTest, method canElectNewLeaderAfterOldLeaderFailure.

@Property(tries = 100, afterFailure = AfterFailureMode.SAMPLE_ONLY)
void canElectNewLeaderAfterOldLeaderFailure(@ForAll int seed, @ForAll @IntRange(min = 3, max = 5) int numVoters, @ForAll @IntRange(min = 0, max = 5) int numObservers, @ForAll boolean isGracefulShutdown) {
    Random random = new Random(seed);
    Cluster cluster = new Cluster(numVoters, numObservers, random);
    MessageRouter router = new MessageRouter(cluster);
    EventScheduler scheduler = schedulerWithDefaultInvariants(cluster);
    // Seed the cluster with some data
    cluster.startAll();
    schedulePolling(scheduler, cluster, 3, 5);
    scheduler.schedule(router::deliverAll, 0, 2, 1);
    scheduler.schedule(new SequentialAppendAction(cluster), 0, 2, 3);
    scheduler.runUntil(cluster::hasConsistentLeader);
    scheduler.runUntil(() -> cluster.anyReachedHighWatermark(10));
    // Shutdown the leader and write some more data. We can verify the new leader has been elected
    // by verifying that the high watermark can still advance.
    int leaderId = cluster.latestLeader().orElseThrow(() -> new AssertionError("Failed to find current leader"));
    if (isGracefulShutdown) {
        cluster.shutdown(leaderId);
    } else {
        cluster.kill(leaderId);
    }
    scheduler.runUntil(() -> cluster.allReachedHighWatermark(20));
    long highWatermark = cluster.maxHighWatermarkReached();
    // Restart the node and verify it catches up
    cluster.start(leaderId);
    scheduler.runUntil(() -> cluster.allReachedHighWatermark(highWatermark + 10));
}
Also used: Random (java.util.Random), Property (net.jqwik.api.Property)

Aggregations

Property (net.jqwik.api.Property): 9
Random (java.util.Random): 7
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 2
MemoryRecords (org.apache.kafka.common.record.MemoryRecords): 2
HashSet (java.util.HashSet): 1
FileRecords (org.apache.kafka.common.record.FileRecords): 1