
Example 6 with RemoteLogSegmentMetadataUpdate

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate in the apache/kafka project.

The class RemoteLogMetadataTransformTest, method testRemoteLogSegmentMetadataUpdateTransform.

@Test
public void testRemoteLogSegmentMetadataUpdateTransform() {
    RemoteLogSegmentMetadataUpdateTransform metadataUpdateTransform = new RemoteLogSegmentMetadataUpdateTransform();
    RemoteLogSegmentMetadataUpdate metadataUpdate = new RemoteLogSegmentMetadataUpdate(
            new RemoteLogSegmentId(TP0, Uuid.randomUuid()), time.milliseconds(),
            RemoteLogSegmentState.COPY_SEGMENT_FINISHED, 1);
    ApiMessageAndVersion apiMessageAndVersion = metadataUpdateTransform.toApiMessageAndVersion(metadataUpdate);
    RemoteLogSegmentMetadataUpdate metadataUpdateFromRecord = metadataUpdateTransform.fromApiMessageAndVersion(apiMessageAndVersion);
    Assertions.assertEquals(metadataUpdate, metadataUpdateFromRecord);
}
Also used : RemoteLogSegmentMetadataUpdate(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate) ApiMessageAndVersion(org.apache.kafka.server.common.ApiMessageAndVersion) RemoteLogSegmentMetadataUpdateTransform(org.apache.kafka.server.log.remote.metadata.storage.serialization.RemoteLogSegmentMetadataUpdateTransform) RemoteLogSegmentId(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId) Test(org.junit.jupiter.api.Test)

Example 7 with RemoteLogSegmentMetadataUpdate

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate in the apache/kafka project.

The class RemoteLogSegmentLifecycleTest, method testCacheSegmentsWithDeleteSegmentFinishedState.

@ParameterizedTest(name = "remoteLogSegmentLifecycleManager = {0}")
@MethodSource("remoteLogSegmentLifecycleManagers")
public void testCacheSegmentsWithDeleteSegmentFinishedState(RemoteLogSegmentLifecycleManager remoteLogSegmentLifecycleManager) throws Exception {
    try {
        remoteLogSegmentLifecycleManager.initialize(topicIdPartition);
        // Create a segment in state DELETE_SEGMENT_STARTED, then move it to DELETE_SEGMENT_FINISHED,
        // checking search and listing behaviour along the way.
        RemoteLogSegmentMetadata segmentMetadata = createSegmentUpdateWithState(remoteLogSegmentLifecycleManager,
                Collections.singletonMap(0, 301L), 301L, 400L, RemoteLogSegmentState.DELETE_SEGMENT_STARTED);
        // Search should not return the above segment, as its leader epoch state is cleared.
        Assertions.assertFalse(remoteLogSegmentLifecycleManager.remoteLogSegmentMetadata(0, 350).isPresent());
        RemoteLogSegmentMetadataUpdate segmentMetadataUpdate = new RemoteLogSegmentMetadataUpdate(
                segmentMetadata.remoteLogSegmentId(), time.milliseconds(),
                RemoteLogSegmentState.DELETE_SEGMENT_FINISHED, BROKER_ID_1);
        remoteLogSegmentLifecycleManager.updateRemoteLogSegmentMetadata(segmentMetadataUpdate);
        // listRemoteLogSegments(0) and listRemoteLogSegments() should not contain the above segment.
        Assertions.assertFalse(remoteLogSegmentLifecycleManager.listRemoteLogSegments(0).hasNext());
        Assertions.assertFalse(remoteLogSegmentLifecycleManager.listAllRemoteLogSegments().hasNext());
    } finally {
        Utils.closeQuietly(remoteLogSegmentLifecycleManager, "RemoteLogSegmentLifecycleManager");
    }
}
Also used : RemoteLogSegmentMetadataUpdate(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate) RemoteLogSegmentMetadata(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata) ParameterizedTest(org.junit.jupiter.params.ParameterizedTest) MethodSource(org.junit.jupiter.params.provider.MethodSource)
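The test above only asserts behaviour after the delete has finished. As an illustration of the intermediate DELETE_SEGMENT_STARTED phase, the following sketch (not part of the original test) shows extra checks one might add before applying the DELETE_SEGMENT_FINISHED update. It assumes, as the comment in the test suggests, that DELETE_SEGMENT_STARTED only clears the leader-epoch offset mapping, so the segment still appears in listings until the delete completes:

        // Hypothetical extra checks, inserted before updateRemoteLogSegmentMetadata(segmentMetadataUpdate) above.
        // Assumption: a DELETE_SEGMENT_STARTED segment is dropped from offset lookups but still listed.
        Assertions.assertTrue(remoteLogSegmentLifecycleManager.listRemoteLogSegments(0).hasNext());
        Assertions.assertTrue(remoteLogSegmentLifecycleManager.listAllRemoteLogSegments().hasNext());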

Example 8 with RemoteLogSegmentMetadataUpdate

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate in the apache/kafka project.

The class RemoteLogSegmentLifecycleTest, method createSegmentUpdateWithState.

private RemoteLogSegmentMetadata createSegmentUpdateWithState(RemoteLogSegmentLifecycleManager remoteLogSegmentLifecycleManager,
                                                              Map<Integer, Long> segmentLeaderEpochs,
                                                              long startOffset, long endOffset,
                                                              RemoteLogSegmentState state) throws RemoteStorageException {
    RemoteLogSegmentId segmentId = new RemoteLogSegmentId(topicIdPartition, Uuid.randomUuid());
    RemoteLogSegmentMetadata segmentMetadata = new RemoteLogSegmentMetadata(segmentId, startOffset, endOffset, -1L,
            BROKER_ID_0, time.milliseconds(), SEG_SIZE, segmentLeaderEpochs);
    remoteLogSegmentLifecycleManager.addRemoteLogSegmentMetadata(segmentMetadata);
    RemoteLogSegmentMetadataUpdate segMetadataUpdate = new RemoteLogSegmentMetadataUpdate(segmentId, time.milliseconds(), state, BROKER_ID_1);
    remoteLogSegmentLifecycleManager.updateRemoteLogSegmentMetadata(segMetadataUpdate);
    return segmentMetadata.createWithUpdates(segMetadataUpdate);
}
Also used : RemoteLogSegmentMetadataUpdate(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate) RemoteLogSegmentId(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId) RemoteLogSegmentMetadata(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata)
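The helper above relies on RemoteLogSegmentMetadata.createWithUpdates, which is not included in this listing. A plausible sketch of what it does, assuming an internal constructor overload that also accepts the segment state (the public constructor used above defaults the state to COPY_SEGMENT_STARTED):

// Sketch only: returns a copy of the immutable metadata with the update's broker id,
// event timestamp and state applied. The state-accepting constructor overload is an assumption.
public RemoteLogSegmentMetadata createWithUpdates(RemoteLogSegmentMetadataUpdate rlsmUpdate) {
    return new RemoteLogSegmentMetadata(remoteLogSegmentId(), startOffset(), endOffset(), maxTimestampMs(),
            rlsmUpdate.brokerId(), rlsmUpdate.eventTimestampMs(), segmentSizeInBytes(),
            rlsmUpdate.state(), segmentLeaderEpochs());
}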

Example 9 with RemoteLogSegmentMetadataUpdate

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate in the apache/kafka project.

The class RemoteLogSegmentMetadataUpdateTransform, method fromApiMessageAndVersion.

public RemoteLogSegmentMetadataUpdate fromApiMessageAndVersion(ApiMessageAndVersion apiMessageAndVersion) {
    RemoteLogSegmentMetadataUpdateRecord record = (RemoteLogSegmentMetadataUpdateRecord) apiMessageAndVersion.message();
    RemoteLogSegmentMetadataUpdateRecord.RemoteLogSegmentIdEntry entry = record.remoteLogSegmentId();
    TopicIdPartition topicIdPartition = new TopicIdPartition(entry.topicIdPartition().id(),
            new TopicPartition(entry.topicIdPartition().name(), entry.topicIdPartition().partition()));
    return new RemoteLogSegmentMetadataUpdate(new RemoteLogSegmentId(topicIdPartition, entry.id()),
            record.eventTimestampMs(), RemoteLogSegmentState.forId(record.remoteLogSegmentState()), record.brokerId());
}
Also used : RemoteLogSegmentMetadataUpdate(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate) TopicPartition(org.apache.kafka.common.TopicPartition) TopicIdPartition(org.apache.kafka.common.TopicIdPartition) RemoteLogSegmentId(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId) RemoteLogSegmentMetadataUpdateRecord(org.apache.kafka.server.log.remote.metadata.storage.generated.RemoteLogSegmentMetadataUpdateRecord)
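The inverse direction, toApiMessageAndVersion (exercised by the test in Example 6), is not part of this listing. A rough sketch of it, assuming the generated RemoteLogSegmentMetadataUpdateRecord exposes fluent setters mirroring the getters used above, and abbreviating the nested-entry construction into a hypothetical createRemoteLogSegmentIdEntry helper:

// Sketch only: serializes an update into the generated record. Setter names mirror the
// getters used in fromApiMessageAndVersion above and are an assumption.
public ApiMessageAndVersion toApiMessageAndVersion(RemoteLogSegmentMetadataUpdate segmentMetadataUpdate) {
    RemoteLogSegmentMetadataUpdateRecord record = new RemoteLogSegmentMetadataUpdateRecord()
            .setRemoteLogSegmentId(createRemoteLogSegmentIdEntry(segmentMetadataUpdate)) // hypothetical helper
            .setBrokerId(segmentMetadataUpdate.brokerId())
            .setEventTimestampMs(segmentMetadataUpdate.eventTimestampMs())
            .setRemoteLogSegmentState(segmentMetadataUpdate.state().id());
    return new ApiMessageAndVersion(record, record.highestSupportedVersion());
}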

Example 10 with RemoteLogSegmentMetadataUpdate

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate in the apache/kafka project.

The class RemoteLogSegmentMetadataTransform, method fromApiMessageAndVersion.

@Override
public RemoteLogSegmentMetadata fromApiMessageAndVersion(ApiMessageAndVersion apiMessageAndVersion) {
    RemoteLogSegmentMetadataRecord record = (RemoteLogSegmentMetadataRecord) apiMessageAndVersion.message();
    RemoteLogSegmentId remoteLogSegmentId = buildRemoteLogSegmentId(record.remoteLogSegmentId());
    Map<Integer, Long> segmentLeaderEpochs = new HashMap<>();
    for (RemoteLogSegmentMetadataRecord.SegmentLeaderEpochEntry segmentLeaderEpoch : record.segmentLeaderEpochs()) {
        segmentLeaderEpochs.put(segmentLeaderEpoch.leaderEpoch(), segmentLeaderEpoch.offset());
    }
    RemoteLogSegmentMetadata remoteLogSegmentMetadata = new RemoteLogSegmentMetadata(remoteLogSegmentId,
            record.startOffset(), record.endOffset(), record.maxTimestampMs(), record.brokerId(),
            record.eventTimestampMs(), record.segmentSizeInBytes(), segmentLeaderEpochs);
    RemoteLogSegmentMetadataUpdate rlsmUpdate = new RemoteLogSegmentMetadataUpdate(remoteLogSegmentId,
            record.eventTimestampMs(), RemoteLogSegmentState.forId(record.remoteLogSegmentState()), record.brokerId());
    return remoteLogSegmentMetadata.createWithUpdates(rlsmUpdate);
}
Also used : HashMap(java.util.HashMap) RemoteLogSegmentMetadataUpdate(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate) RemoteLogSegmentMetadataRecord(org.apache.kafka.server.log.remote.metadata.storage.generated.RemoteLogSegmentMetadataRecord) RemoteLogSegmentId(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId) RemoteLogSegmentMetadata(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata)
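buildRemoteLogSegmentId is a private helper of RemoteLogSegmentMetadataTransform that is not included in this listing. Based on the equivalent inline construction shown in Example 9, it presumably looks something like this (a sketch, not the project's exact code):

// Sketch only: rebuilds the RemoteLogSegmentId from the nested record entry,
// mirroring the inline construction in Example 9.
private RemoteLogSegmentId buildRemoteLogSegmentId(RemoteLogSegmentMetadataRecord.RemoteLogSegmentIdEntry entry) {
    TopicIdPartition topicIdPartition = new TopicIdPartition(entry.topicIdPartition().id(),
            new TopicPartition(entry.topicIdPartition().name(), entry.topicIdPartition().partition()));
    return new RemoteLogSegmentId(topicIdPartition, entry.id());
}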

Aggregations

RemoteLogSegmentMetadataUpdate (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate): 10
RemoteLogSegmentId (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId): 8
RemoteLogSegmentMetadata (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata): 7
Test (org.junit.jupiter.api.Test): 4
HashMap (java.util.HashMap): 2
TopicIdPartition (org.apache.kafka.common.TopicIdPartition): 2
TopicPartition (org.apache.kafka.common.TopicPartition): 2
ParameterizedTest (org.junit.jupiter.params.ParameterizedTest): 2
MethodSource (org.junit.jupiter.params.provider.MethodSource): 2
Path (java.nio.file.Path): 1
Map (java.util.Map): 1
ApiMessageAndVersion (org.apache.kafka.server.common.ApiMessageAndVersion): 1
RemoteLogSegmentMetadataRecord (org.apache.kafka.server.log.remote.metadata.storage.generated.RemoteLogSegmentMetadataRecord): 1
RemoteLogSegmentMetadataUpdateRecord (org.apache.kafka.server.log.remote.metadata.storage.generated.RemoteLogSegmentMetadataUpdateRecord): 1
RemoteLogSegmentMetadataUpdateTransform (org.apache.kafka.server.log.remote.metadata.storage.serialization.RemoteLogSegmentMetadataUpdateTransform): 1
RemoteLogSegmentState (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentState): 1
RemoteStorageException (org.apache.kafka.server.log.remote.storage.RemoteStorageException): 1