
Example 41 with ApiMessageAndVersion

Use of org.apache.kafka.server.common.ApiMessageAndVersion in project kafka by apache.

The class LocalLogManager, method scheduleAppend:

@Override
public long scheduleAppend(int epoch, List<ApiMessageAndVersion> batch) {
    if (batch.isEmpty()) {
        throw new IllegalArgumentException("Batch cannot be empty");
    }
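    // Split the batch into two halves and append each record individually via
    // scheduleAtomicAppend, so the overall append is deliberately non-atomic.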
    List<ApiMessageAndVersion> first = batch.subList(0, batch.size() / 2);
    List<ApiMessageAndVersion> second = batch.subList(batch.size() / 2, batch.size());
    assertEquals(batch.size(), first.size() + second.size());
    assertFalse(second.isEmpty());
    OptionalLong firstOffset = first.stream().mapToLong(record -> scheduleAtomicAppend(epoch, Collections.singletonList(record))).max();
    if (firstOffset.isPresent() && resignAfterNonAtomicCommit.getAndSet(false)) {
        // Emulate losing leadership in the middle of a non-atomic append by not writing
        // the rest of the batch and instead writing a leader change message
        resign(leader.epoch());
        return firstOffset.getAsLong() + second.size();
    } else {
        return second.stream().mapToLong(record -> scheduleAtomicAppend(epoch, Collections.singletonList(record))).max().getAsLong();
    }
}
Also used : IntStream(java.util.stream.IntStream) MockTime(org.apache.kafka.common.utils.MockTime) MockRawSnapshotWriter(org.apache.kafka.snapshot.MockRawSnapshotWriter) MemoryBatchReader(org.apache.kafka.raft.internals.MemoryBatchReader) LoggerFactory(org.slf4j.LoggerFactory) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) HashMap(java.util.HashMap) CompletableFuture(java.util.concurrent.CompletableFuture) SimpleImmutableEntry(java.util.AbstractMap.SimpleImmutableEntry) RecordsSnapshotWriter(org.apache.kafka.snapshot.RecordsSnapshotWriter) OptionalInt(java.util.OptionalInt) MockRawSnapshotReader(org.apache.kafka.snapshot.MockRawSnapshotReader) OptionalLong(java.util.OptionalLong) MemoryPool(org.apache.kafka.common.memory.MemoryPool) SnapshotWriter(org.apache.kafka.snapshot.SnapshotWriter) Assertions.assertFalse(org.junit.jupiter.api.Assertions.assertFalse) BufferSupplier(org.apache.kafka.common.utils.BufferSupplier) KafkaEventQueue(org.apache.kafka.queue.KafkaEventQueue) LogContext(org.apache.kafka.common.utils.LogContext) RawSnapshotWriter(org.apache.kafka.snapshot.RawSnapshotWriter) Map(java.util.Map) ThreadLocalRandom(java.util.concurrent.ThreadLocalRandom) ApiMessageAndVersion(org.apache.kafka.server.common.ApiMessageAndVersion) Assertions.assertEquals(org.junit.jupiter.api.Assertions.assertEquals) SnapshotReader(org.apache.kafka.snapshot.SnapshotReader) OffsetAndEpoch(org.apache.kafka.raft.OffsetAndEpoch) RecordsSnapshotReader(org.apache.kafka.snapshot.RecordsSnapshotReader) MetadataRecordSerde(org.apache.kafka.metadata.MetadataRecordSerde) CompressionType(org.apache.kafka.common.record.CompressionType) RawSnapshotReader(org.apache.kafka.snapshot.RawSnapshotReader) Logger(org.slf4j.Logger) IdentityHashMap(java.util.IdentityHashMap) Time(org.apache.kafka.common.utils.Time) Iterator(java.util.Iterator) NavigableMap(java.util.NavigableMap) Collectors(java.util.stream.Collectors) Batch(org.apache.kafka.raft.Batch) Objects(java.util.Objects) ExecutionException(java.util.concurrent.ExecutionException) List(java.util.List) ObjectSerializationCache(org.apache.kafka.common.protocol.ObjectSerializationCache) TreeMap(java.util.TreeMap) EventQueue(org.apache.kafka.queue.EventQueue) Entry(java.util.Map.Entry) Optional(java.util.Optional) LeaderAndEpoch(org.apache.kafka.raft.LeaderAndEpoch) RaftClient(org.apache.kafka.raft.RaftClient) Collections(java.util.Collections) ApiMessageAndVersion(org.apache.kafka.server.common.ApiMessageAndVersion) OptionalLong(java.util.OptionalLong)
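
As a point of reference for the example above, an ApiMessageAndVersion simply pairs a generated metadata record with the schema version it should be written at. A minimal sketch of feeding scheduleAppend, assuming a LocalLogManager instance named logManager and an epoch value taken from the surrounding test setup (the TopicRecord contents are illustrative only):

// Sketch only: wrap generated metadata records in ApiMessageAndVersion and append them.
// "logManager" and "epoch" are assumed to come from the test environment; the record
// contents below are purely illustrative.
List<ApiMessageAndVersion> records = Arrays.asList(
    new ApiMessageAndVersion(new TopicRecord().setName("foo").setTopicId(Uuid.randomUuid()), (short) 0),
    new ApiMessageAndVersion(new TopicRecord().setName("bar").setTopicId(Uuid.randomUuid()), (short) 0));
long lastOffset = logManager.scheduleAppend(epoch, records);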

Example 42 with ApiMessageAndVersion

Use of org.apache.kafka.server.common.ApiMessageAndVersion in project kafka by apache.

The class RemoteLogMetadataTransformTest, method testRemoteLogSegmentMetadataUpdateTransform:

@Test
public void testRemoteLogSegmentMetadataUpdateTransform() {
    RemoteLogSegmentMetadataUpdateTransform metadataUpdateTransform = new RemoteLogSegmentMetadataUpdateTransform();
    RemoteLogSegmentMetadataUpdate metadataUpdate = new RemoteLogSegmentMetadataUpdate(new RemoteLogSegmentId(TP0, Uuid.randomUuid()), time.milliseconds(), RemoteLogSegmentState.COPY_SEGMENT_FINISHED, 1);
    ApiMessageAndVersion apiMessageAndVersion = metadataUpdateTransform.toApiMessageAndVersion(metadataUpdate);
    RemoteLogSegmentMetadataUpdate metadataUpdateFromRecord = metadataUpdateTransform.fromApiMessageAndVersion(apiMessageAndVersion);
    Assertions.assertEquals(metadataUpdate, metadataUpdateFromRecord);
}
Also used : RemoteLogSegmentMetadataUpdate(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate) ApiMessageAndVersion(org.apache.kafka.server.common.ApiMessageAndVersion) RemoteLogSegmentMetadataUpdateTransform(org.apache.kafka.server.log.remote.metadata.storage.serialization.RemoteLogSegmentMetadataUpdateTransform) RemoteLogSegmentId(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId) Test(org.junit.jupiter.api.Test)

Example 43 with ApiMessageAndVersion

Use of org.apache.kafka.server.common.ApiMessageAndVersion in project kafka by apache.

The class RemoteLogMetadataTransformTest, method testRemoteLogSegmentMetadataTransform:

@Test
public void testRemoteLogSegmentMetadataTransform() {
    RemoteLogSegmentMetadataTransform metadataTransform = new RemoteLogSegmentMetadataTransform();
    RemoteLogSegmentMetadata metadata = createRemoteLogSegmentMetadata();
    ApiMessageAndVersion apiMessageAndVersion = metadataTransform.toApiMessageAndVersion(metadata);
    RemoteLogSegmentMetadata remoteLogSegmentMetadataFromRecord = metadataTransform.fromApiMessageAndVersion(apiMessageAndVersion);
    Assertions.assertEquals(metadata, remoteLogSegmentMetadataFromRecord);
}
Also used : RemoteLogSegmentMetadataTransform(org.apache.kafka.server.log.remote.metadata.storage.serialization.RemoteLogSegmentMetadataTransform) ApiMessageAndVersion(org.apache.kafka.server.common.ApiMessageAndVersion) RemoteLogSegmentMetadata(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata) Test(org.junit.jupiter.api.Test)
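
The createRemoteLogSegmentMetadata() helper is not shown in this excerpt. A hypothetical sketch of what it might look like, assuming the public RemoteLogSegmentMetadata constructor that takes a segment id, offset range, timestamps, broker id, segment size and a leader-epoch map (all field values below are illustrative only):

// Hypothetical helper, not taken from the Kafka sources: builds a RemoteLogSegmentMetadata
// for TP0 with illustrative offsets, timestamps and segment size.
private RemoteLogSegmentMetadata createRemoteLogSegmentMetadata() {
    Map<Integer, Long> segmentLeaderEpochs = Collections.singletonMap(0, 0L);
    return new RemoteLogSegmentMetadata(new RemoteLogSegmentId(TP0, Uuid.randomUuid()),
        0L, 100L, time.milliseconds(), 1, time.milliseconds(), 1024, segmentLeaderEpochs);
}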

Example 44 with ApiMessageAndVersion

Use of org.apache.kafka.server.common.ApiMessageAndVersion in project kafka by apache.

The class RemoteLogMetadataTransformTest, method testRemoteLogPartitionMetadataTransform:

@Test
public void testRemoteLogPartitionMetadataTransform() {
    RemotePartitionDeleteMetadataTransform transform = new RemotePartitionDeleteMetadataTransform();
    RemotePartitionDeleteMetadata partitionDeleteMetadata = new RemotePartitionDeleteMetadata(TP0, RemotePartitionDeleteState.DELETE_PARTITION_STARTED, time.milliseconds(), 1);
    ApiMessageAndVersion apiMessageAndVersion = transform.toApiMessageAndVersion(partitionDeleteMetadata);
    RemotePartitionDeleteMetadata partitionDeleteMetadataFromRecord = transform.fromApiMessageAndVersion(apiMessageAndVersion);
    Assertions.assertEquals(partitionDeleteMetadata, partitionDeleteMetadataFromRecord);
}
Also used : RemotePartitionDeleteMetadataTransform(org.apache.kafka.server.log.remote.metadata.storage.serialization.RemotePartitionDeleteMetadataTransform) ApiMessageAndVersion(org.apache.kafka.server.common.ApiMessageAndVersion) RemotePartitionDeleteMetadata(org.apache.kafka.server.log.remote.storage.RemotePartitionDeleteMetadata) Test(org.junit.jupiter.api.Test)

Example 45 with ApiMessageAndVersion

Use of org.apache.kafka.server.common.ApiMessageAndVersion in project kafka by apache.

The class QuorumControllerTest, method testSnapshotConfiguration:

@Test
public void testSnapshotConfiguration() throws Throwable {
    final int numBrokers = 4;
    final int maxNewRecordBytes = 4;
    Map<Integer, Long> brokerEpochs = new HashMap<>();
    Uuid fooId;
    try (LocalLogManagerTestEnv logEnv = new LocalLogManagerTestEnv(3, Optional.empty())) {
        try (QuorumControllerTestEnv controlEnv = new QuorumControllerTestEnv(logEnv, builder -> {
            builder.setConfigDefs(CONFIGS).setSnapshotMaxNewRecordBytes(maxNewRecordBytes);
        })) {
            QuorumController active = controlEnv.activeController();
            for (int i = 0; i < numBrokers; i++) {
                BrokerRegistrationReply reply = active.registerBroker(
                    new BrokerRegistrationRequestData().
                        setBrokerId(i).
                        setRack(null).
                        setClusterId(active.clusterId()).
                        setIncarnationId(Uuid.fromString("kxAT73dKQsitIedpiPtwB" + i)).
                        setListeners(new ListenerCollection(Arrays.asList(new Listener().
                            setName("PLAINTEXT").setHost("localhost").setPort(9092 + i)).iterator()))).get();
                brokerEpochs.put(i, reply.epoch());
            }
            for (int i = 0; i < numBrokers - 1; i++) {
                assertEquals(new BrokerHeartbeatReply(true, false, false, false),
                    active.processBrokerHeartbeat(new BrokerHeartbeatRequestData().
                        setWantFence(false).
                        setBrokerEpoch(brokerEpochs.get(i)).
                        setBrokerId(i).
                        setCurrentMetadataOffset(100000L)).get());
            }
            CreateTopicsResponseData fooData = active.createTopics(
                new CreateTopicsRequestData().setTopics(new CreatableTopicCollection(Collections.singleton(
                    new CreatableTopic().setName("foo").setNumPartitions(-1).setReplicationFactor((short) -1).
                        setAssignments(new CreatableReplicaAssignmentCollection(Arrays.asList(
                            new CreatableReplicaAssignment().setPartitionIndex(0).setBrokerIds(Arrays.asList(0, 1, 2)),
                            new CreatableReplicaAssignment().setPartitionIndex(1).setBrokerIds(Arrays.asList(1, 2, 0))).
                                iterator()))).iterator()))).get();
            fooId = fooData.topics().find("foo").topicId();
            active.allocateProducerIds(new AllocateProducerIdsRequestData().setBrokerId(0).setBrokerEpoch(brokerEpochs.get(0))).get();
            SnapshotReader<ApiMessageAndVersion> snapshot = createSnapshotReader(logEnv.waitForLatestSnapshot());
            checkSnapshotSubcontent(expectedSnapshotContent(fooId, brokerEpochs), snapshot);
        }
    }
}
Also used : BrokerHeartbeatReply(org.apache.kafka.metadata.BrokerHeartbeatReply) ListenerCollection(org.apache.kafka.common.message.BrokerRegistrationRequestData.ListenerCollection) LocalLogManagerTestEnv(org.apache.kafka.metalog.LocalLogManagerTestEnv) Listener(org.apache.kafka.common.message.BrokerRegistrationRequestData.Listener) HashMap(java.util.HashMap) BrokerRegistrationRequestData(org.apache.kafka.common.message.BrokerRegistrationRequestData) BrokerRegistrationReply(org.apache.kafka.metadata.BrokerRegistrationReply) CreateTopicsResponseData(org.apache.kafka.common.message.CreateTopicsResponseData) BrokerEndpoint(org.apache.kafka.common.metadata.RegisterBrokerRecord.BrokerEndpoint) CreatableTopicCollection(org.apache.kafka.common.message.CreateTopicsRequestData.CreatableTopicCollection) Uuid(org.apache.kafka.common.Uuid) BrokerHeartbeatRequestData(org.apache.kafka.common.message.BrokerHeartbeatRequestData) CreatableTopic(org.apache.kafka.common.message.CreateTopicsRequestData.CreatableTopic) CreatableReplicaAssignment(org.apache.kafka.common.message.CreateTopicsRequestData.CreatableReplicaAssignment) CreateTopicsRequestData(org.apache.kafka.common.message.CreateTopicsRequestData) ApiMessageAndVersion(org.apache.kafka.server.common.ApiMessageAndVersion) AllocateProducerIdsRequestData(org.apache.kafka.common.message.AllocateProducerIdsRequestData) CreatableReplicaAssignmentCollection(org.apache.kafka.common.message.CreateTopicsRequestData.CreatableReplicaAssignmentCollection) Test(org.junit.jupiter.api.Test)
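
The checkSnapshotSubcontent() helper is omitted from this excerpt, but draining a SnapshotReader<ApiMessageAndVersion> generally follows the pattern sketched below (assuming, as in the Kafka raft API, that the reader is an Iterator over Batch<ApiMessageAndVersion> and is AutoCloseable):

// Sketch only: collect every record from the snapshot into a flat list for comparison.
List<ApiMessageAndVersion> read = new ArrayList<>();
try (SnapshotReader<ApiMessageAndVersion> reader = snapshot) {
    while (reader.hasNext()) {
        Batch<ApiMessageAndVersion> batch = reader.next();
        read.addAll(batch.records());
    }
}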

Aggregations

ApiMessageAndVersion (org.apache.kafka.server.common.ApiMessageAndVersion): 84 uses
ArrayList (java.util.ArrayList): 38 uses
Test (org.junit.jupiter.api.Test): 35 uses
Uuid (org.apache.kafka.common.Uuid): 23 uses
ApiError (org.apache.kafka.common.requests.ApiError): 20 uses
LogContext (org.apache.kafka.common.utils.LogContext): 17 uses
HashMap (java.util.HashMap): 16 uses
SnapshotRegistry (org.apache.kafka.timeline.SnapshotRegistry): 15 uses
List (java.util.List): 12 uses
Map (java.util.Map): 12 uses
PartitionChangeRecord (org.apache.kafka.common.metadata.PartitionChangeRecord): 12 uses
PartitionRegistration (org.apache.kafka.metadata.PartitionRegistration): 11 uses
TopicRecord (org.apache.kafka.common.metadata.TopicRecord): 8 uses
UnknownTopicOrPartitionException (org.apache.kafka.common.errors.UnknownTopicOrPartitionException): 7 uses
AlterIsrRequestData (org.apache.kafka.common.message.AlterIsrRequestData): 7 uses
Collections (java.util.Collections): 6 uses
Iterator (java.util.Iterator): 6 uses
Entry (java.util.Map.Entry): 6 uses
NoSuchElementException (java.util.NoSuchElementException): 6 uses
Optional (java.util.Optional): 6 uses