Example 11 with RemoteLogSegmentMetadata

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata in project kafka by apache.

From the class RemoteLogMetadataTransformTest, method testRemoteLogSegmentMetadataTransform.

@Test
public void testRemoteLogSegmentMetadataTransform() {
    RemoteLogSegmentMetadataTransform metadataTransform = new RemoteLogSegmentMetadataTransform();
    RemoteLogSegmentMetadata metadata = createRemoteLogSegmentMetadata();
    ApiMessageAndVersion apiMessageAndVersion = metadataTransform.toApiMessageAndVersion(metadata);
    RemoteLogSegmentMetadata remoteLogSegmentMetadataFromRecord = metadataTransform.fromApiMessageAndVersion(apiMessageAndVersion);
    Assertions.assertEquals(metadata, remoteLogSegmentMetadataFromRecord);
}
Also used: RemoteLogSegmentMetadataTransform (org.apache.kafka.server.log.remote.metadata.storage.serialization.RemoteLogSegmentMetadataTransform), ApiMessageAndVersion (org.apache.kafka.server.common.ApiMessageAndVersion), RemoteLogSegmentMetadata (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata), Test (org.junit.jupiter.api.Test)
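
The helper createRemoteLogSegmentMetadata() is not shown on this page. A minimal sketch of what such a helper could look like, assuming the same RemoteLogSegmentMetadata constructor and the topicIdPartition, BROKER_ID_0, SEG_SIZE, and time fields used in the examples below; the offset values and field names here are assumptions, not the project's actual implementation:

private RemoteLogSegmentMetadata createRemoteLogSegmentMetadata() {
    // Hypothetical reconstruction: build a random segment id for the test partition and wrap it
    // in metadata covering offsets 0..100 for leader epoch 0.
    RemoteLogSegmentId segmentId = new RemoteLogSegmentId(topicIdPartition, Uuid.randomUuid());
    return new RemoteLogSegmentMetadata(segmentId, 0L, 100L, -1L, BROKER_ID_0, time.milliseconds(), SEG_SIZE, Collections.singletonMap(0, 0L));
}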

Example 12 with RemoteLogSegmentMetadata

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata in project kafka by apache.

From the class RemoteLogSegmentLifecycleTest, method testCacheSegmentWithCopySegmentStartedState.

@ParameterizedTest(name = "remoteLogSegmentLifecycleManager = {0}")
@MethodSource("remoteLogSegmentLifecycleManagers")
public void testCacheSegmentWithCopySegmentStartedState(RemoteLogSegmentLifecycleManager remoteLogSegmentLifecycleManager) throws Exception {
    try {
        remoteLogSegmentLifecycleManager.initialize(topicIdPartition);
        // Create a segment with state COPY_SEGMENT_STARTED, then verify searching for that segment and listing
        // the segments.
        RemoteLogSegmentId segmentId = new RemoteLogSegmentId(topicIdPartition, Uuid.randomUuid());
        RemoteLogSegmentMetadata segmentMetadata = new RemoteLogSegmentMetadata(segmentId, 0L, 50L, -1L, BROKER_ID_0, time.milliseconds(), SEG_SIZE, Collections.singletonMap(0, 0L));
        remoteLogSegmentLifecycleManager.addRemoteLogSegmentMetadata(segmentMetadata);
        // This segment should not be available, as its state has not yet reached COPY_SEGMENT_FINISHED.
        Optional<RemoteLogSegmentMetadata> segMetadataForOffset0Epoch0 = remoteLogSegmentLifecycleManager.remoteLogSegmentMetadata(0, 0);
        Assertions.assertFalse(segMetadataForOffset0Epoch0.isPresent());
        // The listRemoteLogSegments APIs should still contain the above segment.
        checkListSegments(remoteLogSegmentLifecycleManager, 0, segmentMetadata);
    } finally {
        Utils.closeQuietly(remoteLogSegmentLifecycleManager, "RemoteLogSegmentLifecycleManager");
    }
}
Also used: RemoteLogSegmentId (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId), RemoteLogSegmentMetadata (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest), MethodSource (org.junit.jupiter.params.provider.MethodSource)
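
The checkListSegments helper called above (and again in Example 15) is also not shown. A rough sketch of what it presumably verifies, assuming listRemoteLogSegments(int leaderEpoch) and listAllRemoteLogSegments() return Iterator<RemoteLogSegmentMetadata>, as the assertions in Example 13 suggest:

private void checkListSegments(RemoteLogSegmentLifecycleManager remoteLogSegmentLifecycleManager,
                               int leaderEpoch,
                               RemoteLogSegmentMetadata expectedSegmentMetadata) throws RemoteStorageException {
    // Hypothetical reconstruction: both the per-epoch listing and the full listing are expected
    // to contain the given segment metadata.
    Iterator<RemoteLogSegmentMetadata> segmentsForEpoch = remoteLogSegmentLifecycleManager.listRemoteLogSegments(leaderEpoch);
    Assertions.assertTrue(segmentsForEpoch.hasNext());
    Assertions.assertEquals(expectedSegmentMetadata, segmentsForEpoch.next());
    Iterator<RemoteLogSegmentMetadata> allSegments = remoteLogSegmentLifecycleManager.listAllRemoteLogSegments();
    Assertions.assertTrue(allSegments.hasNext());
    Assertions.assertEquals(expectedSegmentMetadata, allSegments.next());
}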

Example 13 with RemoteLogSegmentMetadata

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata in project kafka by apache.

From the class RemoteLogSegmentLifecycleTest, method testCacheSegmentsWithDeleteSegmentFinishedState.

@ParameterizedTest(name = "remoteLogSegmentLifecycleManager = {0}")
@MethodSource("remoteLogSegmentLifecycleManagers")
public void testCacheSegmentsWithDeleteSegmentFinishedState(RemoteLogSegmentLifecycleManager remoteLogSegmentLifecycleManager) throws Exception {
    try {
        remoteLogSegmentLifecycleManager.initialize(topicIdPartition);
        // Create a segment and move it to state DELETE_SEGMENT_FINISHED, then verify searching for that segment
        // and listing the segments.
        RemoteLogSegmentMetadata segmentMetadata = createSegmentUpdateWithState(remoteLogSegmentLifecycleManager, Collections.singletonMap(0, 301L), 301L, 400L, RemoteLogSegmentState.DELETE_SEGMENT_STARTED);
        // The search should not return the above segment, as its leader epoch state has been cleared.
        Assertions.assertFalse(remoteLogSegmentLifecycleManager.remoteLogSegmentMetadata(0, 350).isPresent());
        RemoteLogSegmentMetadataUpdate segmentMetadataUpdate = new RemoteLogSegmentMetadataUpdate(segmentMetadata.remoteLogSegmentId(), time.milliseconds(), RemoteLogSegmentState.DELETE_SEGMENT_FINISHED, BROKER_ID_1);
        remoteLogSegmentLifecycleManager.updateRemoteLogSegmentMetadata(segmentMetadataUpdate);
        // listRemoteLogSegments(0) and listAllRemoteLogSegments() should not contain the above segment.
        Assertions.assertFalse(remoteLogSegmentLifecycleManager.listRemoteLogSegments(0).hasNext());
        Assertions.assertFalse(remoteLogSegmentLifecycleManager.listAllRemoteLogSegments().hasNext());
    } finally {
        Utils.closeQuietly(remoteLogSegmentLifecycleManager, "RemoteLogSegmentLifecycleManager");
    }
}
Also used: RemoteLogSegmentMetadataUpdate (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate), RemoteLogSegmentMetadata (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest), MethodSource (org.junit.jupiter.params.provider.MethodSource)

Example 14 with RemoteLogSegmentMetadata

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata in project kafka by apache.

From the class RemoteLogSegmentLifecycleTest, method createSegmentUpdateWithState.

private RemoteLogSegmentMetadata createSegmentUpdateWithState(RemoteLogSegmentLifecycleManager remoteLogSegmentLifecycleManager, Map<Integer, Long> segmentLeaderEpochs, long startOffset, long endOffset, RemoteLogSegmentState state) throws RemoteStorageException {
    RemoteLogSegmentId segmentId = new RemoteLogSegmentId(topicIdPartition, Uuid.randomUuid());
    RemoteLogSegmentMetadata segmentMetadata = new RemoteLogSegmentMetadata(segmentId, startOffset, endOffset, -1L, BROKER_ID_0, time.milliseconds(), SEG_SIZE, segmentLeaderEpochs);
    remoteLogSegmentLifecycleManager.addRemoteLogSegmentMetadata(segmentMetadata);
    RemoteLogSegmentMetadataUpdate segMetadataUpdate = new RemoteLogSegmentMetadataUpdate(segmentId, time.milliseconds(), state, BROKER_ID_1);
    remoteLogSegmentLifecycleManager.updateRemoteLogSegmentMetadata(segMetadataUpdate);
    return segmentMetadata.createWithUpdates(segMetadataUpdate);
}
Also used: RemoteLogSegmentMetadataUpdate (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate), RemoteLogSegmentId (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId), RemoteLogSegmentMetadata (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata)
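
A hedged usage sketch of this helper (the offsets and leader epoch map below are illustrative, not taken from the project's tests): driving a segment to COPY_SEGMENT_FINISHED, the state under which, per the comment in Example 12, offset lookups are expected to succeed.

RemoteLogSegmentMetadata copyFinishedSegmentMetadata = createSegmentUpdateWithState(remoteLogSegmentLifecycleManager, Collections.singletonMap(0, 101L), 101L, 200L, RemoteLogSegmentState.COPY_SEGMENT_FINISHED);
// Once the segment reaches COPY_SEGMENT_FINISHED, an offset within its range for the matching
// leader epoch is expected to resolve to that segment.
Optional<RemoteLogSegmentMetadata> segmentMetadataForOffset150Epoch0 = remoteLogSegmentLifecycleManager.remoteLogSegmentMetadata(0, 150);
Assertions.assertEquals(Optional.of(copyFinishedSegmentMetadata), segmentMetadataForOffset150Epoch0);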

Example 15 with RemoteLogSegmentMetadata

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata in project kafka by apache.

From the class RemoteLogSegmentLifecycleTest, method testCacheSegmentWithDeleteSegmentStartedState.

@ParameterizedTest(name = "remoteLogSegmentLifecycleManager = {0}")
@MethodSource("remoteLogSegmentLifecycleManagers")
public void testCacheSegmentWithDeleteSegmentStartedState(RemoteLogSegmentLifecycleManager remoteLogSegmentLifecycleManager) throws Exception {
    try {
        remoteLogSegmentLifecycleManager.initialize(topicIdPartition);
        // Create a segment and move it to state DELETE_SEGMENT_STARTED, then verify searching for that segment
        // and listing the segments.
        RemoteLogSegmentMetadata segmentMetadata = createSegmentUpdateWithState(remoteLogSegmentLifecycleManager, Collections.singletonMap(0, 201L), 201L, 300L, RemoteLogSegmentState.DELETE_SEGMENT_STARTED);
        // The search should not return the above segment, as its leader epoch state has been cleared.
        Optional<RemoteLogSegmentMetadata> segmentMetadataForOffset250Epoch0 = remoteLogSegmentLifecycleManager.remoteLogSegmentMetadata(0, 250);
        Assertions.assertFalse(segmentMetadataForOffset250Epoch0.isPresent());
        checkListSegments(remoteLogSegmentLifecycleManager, 0, segmentMetadata);
    } finally {
        Utils.closeQuietly(remoteLogSegmentLifecycleManager, "RemoteLogSegmentLifecycleManager");
    }
}
Also used: RemoteLogSegmentMetadata (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata), ParameterizedTest (org.junit.jupiter.params.ParameterizedTest), MethodSource (org.junit.jupiter.params.provider.MethodSource)

Aggregations

RemoteLogSegmentMetadata (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata): 20
RemoteLogSegmentId (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId): 14
RemoteLogSegmentMetadataUpdate (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate): 8
HashMap (java.util.HashMap): 6
Test (org.junit.jupiter.api.Test): 6
ParameterizedTest (org.junit.jupiter.params.ParameterizedTest): 6
MethodSource (org.junit.jupiter.params.provider.MethodSource): 6
Map (java.util.Map): 3
TopicIdPartition (org.apache.kafka.common.TopicIdPartition): 3
TopicPartition (org.apache.kafka.common.TopicPartition): 3
RemoteLogSegmentState (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentState): 3
RemoteResourceNotFoundException (org.apache.kafka.server.log.remote.storage.RemoteResourceNotFoundException): 3
Path (java.nio.file.Path): 2
ArrayList (java.util.ArrayList): 2
NavigableMap (java.util.NavigableMap): 2
ConcurrentHashMap (java.util.concurrent.ConcurrentHashMap): 2
ConcurrentMap (java.util.concurrent.ConcurrentMap): 2
Seq (scala.collection.Seq): 2
File (java.io.File): 1
Collections (java.util.Collections): 1