
Example 1 with ProducerIdsBlock

Use of org.apache.kafka.server.common.ProducerIdsBlock in the apache/kafka project.

From the class ProducerIdControlManagerTest, method testSnapshotIterator.

@Test
public void testSnapshotIterator() {
    ProducerIdsBlock range = null;
    for (int i = 0; i < 100; i++) {
        range = generateProducerIds(producerIdControlManager, i % 4, 100);
    }
    Iterator<List<ApiMessageAndVersion>> snapshotIterator = producerIdControlManager.iterator(Long.MAX_VALUE);
    assertTrue(snapshotIterator.hasNext());
    List<ApiMessageAndVersion> batch = snapshotIterator.next();
    assertEquals(1, batch.size(), "Producer IDs record batch should only contain a single record");
    assertEquals(range.firstProducerId() + range.size(), ((ProducerIdsRecord) batch.get(0).message()).nextProducerId());
    assertFalse(snapshotIterator.hasNext(), "Producer IDs iterator should only contain a single batch");
    // Replay the snapshot records into a fresh manager, simulating a controller loading from a snapshot.
    ProducerIdControlManager newProducerIdManager = new ProducerIdControlManager(clusterControl, snapshotRegistry);
    snapshotIterator = producerIdControlManager.iterator(Long.MAX_VALUE);
    while (snapshotIterator.hasNext()) {
        snapshotIterator.next().forEach(message -> newProducerIdManager.replay((ProducerIdsRecord) message.message()));
    }
    // Verify that after reloading state from this "snapshot", we don't produce any overlapping IDs
    long lastProducerID = range.firstProducerId() + range.size() - 1;
    range = generateProducerIds(producerIdControlManager, 1, 100);
    assertTrue(range.firstProducerId() > lastProducerID);
}
Also used: ProducerIdsRecord (org.apache.kafka.common.metadata.ProducerIdsRecord), ApiMessageAndVersion (org.apache.kafka.server.common.ApiMessageAndVersion), ProducerIdsBlock (org.apache.kafka.server.common.ProducerIdsBlock), List (java.util.List), Test (org.junit.jupiter.api.Test)

Example 2 with ProducerIdsBlock

Use of org.apache.kafka.server.common.ProducerIdsBlock in the apache/kafka project.

From the class ProducerIdControlManagerTest, method testInitialResult.

@Test
public void testInitialResult() {
    // Arguments are (brokerId, brokerEpoch); the allocated block always has the fixed size of 1000 IDs.
    ControllerResult<ProducerIdsBlock> result = producerIdControlManager.generateNextProducerId(1, 100);
    assertEquals(0, result.response().firstProducerId());
    assertEquals(1000, result.response().size());
    ProducerIdsRecord record = (ProducerIdsRecord) result.records().get(0).message();
    assertEquals(1000, record.nextProducerId());
}
Also used: ProducerIdsRecord (org.apache.kafka.common.metadata.ProducerIdsRecord), ProducerIdsBlock (org.apache.kafka.server.common.ProducerIdsBlock), Test (org.junit.jupiter.api.Test)

Example 3 with ProducerIdsBlock

Use of org.apache.kafka.server.common.ProducerIdsBlock in the apache/kafka project.

From the class ProducerIdControlManagerTest, method testMonotonic.

@Test
public void testMonotonic() {
    producerIdControlManager.replay(new ProducerIdsRecord().setBrokerId(1).setBrokerEpoch(100).setNextProducerId(42));
    ProducerIdsBlock range = producerIdControlManager.generateNextProducerId(1, 100).response();
    assertEquals(42, range.firstProducerId());
    // Can't go backwards in Producer IDs
    assertThrows(RuntimeException.class, () -> {
        producerIdControlManager.replay(new ProducerIdsRecord().setBrokerId(1).setBrokerEpoch(100).setNextProducerId(40));
    }, "Producer ID range must only increase");
    range = producerIdControlManager.generateNextProducerId(1, 100).response();
    assertEquals(42, range.firstProducerId());
    // Gaps in the ID range are okay.
    producerIdControlManager.replay(new ProducerIdsRecord().setBrokerId(1).setBrokerEpoch(100).setNextProducerId(50));
    range = producerIdControlManager.generateNextProducerId(1, 100).response();
    assertEquals(50, range.firstProducerId());
}
Also used: ProducerIdsRecord (org.apache.kafka.common.metadata.ProducerIdsRecord), ProducerIdsBlock (org.apache.kafka.server.common.ProducerIdsBlock), Test (org.junit.jupiter.api.Test)
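The monotonicity rule this test exercises can be sketched in a few self-contained lines: replaying a ProducerIdsRecord may move the next producer ID forward (gaps are fine) but never backward. The class and method names below are hypothetical illustrations, not Kafka's actual classes:

```java
// Minimal sketch of the replay-monotonicity invariant; names are illustrative only.
public class MonotonicReplaySketch {
    private long nextProducerId = 0;

    // Accept a replayed "next producer ID" only if it does not move backwards.
    void replay(long recordNextProducerId) {
        if (recordNextProducerId < nextProducerId) {
            throw new RuntimeException("Producer ID range must only increase");
        }
        nextProducerId = recordNextProducerId;
    }

    public static void main(String[] args) {
        MonotonicReplaySketch mgr = new MonotonicReplaySketch();
        mgr.replay(42);          // forward: accepted
        mgr.replay(50);          // gap (42 -> 50): accepted
        boolean rejected = false;
        try {
            mgr.replay(40);      // backward: rejected
        } catch (RuntimeException e) {
            rejected = true;
        }
        // prints "true 50": the backward replay was rejected and state stayed at 50
        System.out.println(rejected + " " + mgr.nextProducerId);
    }
}
```

This mirrors why the test expects a RuntimeException on replaying nextProducerId 40 after 42, while replaying 50 (a gap) succeeds.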

Example 4 with ProducerIdsBlock

Use of org.apache.kafka.server.common.ProducerIdsBlock in the apache/kafka project.

From the class ProducerIdControlManager, method generateNextProducerId.

ControllerResult<ProducerIdsBlock> generateNextProducerId(int brokerId, long brokerEpoch) {
    clusterControlManager.checkBrokerEpoch(brokerId, brokerEpoch);
    long firstProducerIdInBlock = nextProducerId.get();
    if (firstProducerIdInBlock > Long.MAX_VALUE - ProducerIdsBlock.PRODUCER_ID_BLOCK_SIZE) {
        throw new UnknownServerException("Exhausted all producerIds as the next block's end producerId " + "has exceeded the int64 type limit");
    }
    ProducerIdsBlock block = new ProducerIdsBlock(brokerId, firstProducerIdInBlock, ProducerIdsBlock.PRODUCER_ID_BLOCK_SIZE);
    long newNextProducerId = block.nextBlockFirstId();
    ProducerIdsRecord record = new ProducerIdsRecord().setNextProducerId(newNextProducerId).setBrokerId(brokerId).setBrokerEpoch(brokerEpoch);
    return ControllerResult.of(Collections.singletonList(new ApiMessageAndVersion(record, (short) 0)), block);
}
Also used: ProducerIdsRecord (org.apache.kafka.common.metadata.ProducerIdsRecord), ApiMessageAndVersion (org.apache.kafka.server.common.ApiMessageAndVersion), ProducerIdsBlock (org.apache.kafka.server.common.ProducerIdsBlock), UnknownServerException (org.apache.kafka.common.errors.UnknownServerException)
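The allocation arithmetic in generateNextProducerId reduces to handing out fixed-size blocks from a monotonically increasing counter, with a guard against int64 overflow. The following is a hypothetical standalone sketch of that arithmetic (the class name and 1000-ID block size are assumptions for illustration, not Kafka's code):

```java
// Standalone sketch of fixed-size producer-ID block allocation; names are illustrative.
public class ProducerIdBlockSketch {
    static final long BLOCK_SIZE = 1000; // assumed size, mirroring PRODUCER_ID_BLOCK_SIZE
    private long nextProducerId = 0;

    // Returns {firstProducerId, size} for the next block and advances the counter.
    long[] generateNextBlock() {
        long first = nextProducerId;
        // Refuse a block whose end would overflow a signed 64-bit producer ID.
        if (first > Long.MAX_VALUE - BLOCK_SIZE) {
            throw new RuntimeException("Exhausted all producer IDs");
        }
        nextProducerId = first + BLOCK_SIZE;
        return new long[] {first, BLOCK_SIZE};
    }

    public static void main(String[] args) {
        ProducerIdBlockSketch mgr = new ProducerIdBlockSketch();
        long[] b1 = mgr.generateNextBlock();
        long[] b2 = mgr.generateNextBlock();
        // prints "0 1000": consecutive blocks never overlap
        System.out.println(b1[0] + " " + b2[0]);
    }
}
```

Note that the real method also records the new counter in a ProducerIdsRecord so the allocation is replicated through the metadata log before the block is handed to the broker.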

Aggregations

ProducerIdsRecord (org.apache.kafka.common.metadata.ProducerIdsRecord): 4
ProducerIdsBlock (org.apache.kafka.server.common.ProducerIdsBlock): 4
Test (org.junit.jupiter.api.Test): 3
ApiMessageAndVersion (org.apache.kafka.server.common.ApiMessageAndVersion): 2
List (java.util.List): 1
UnknownServerException (org.apache.kafka.common.errors.UnknownServerException): 1