
Example 6 with RemoteLogSegmentMetadata

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata in project kafka by apache.

From class FileBasedRemoteLogMetadataCacheTest, method testFileBasedRemoteLogMetadataCacheWithUnreferencedSegments:

@Test
public void testFileBasedRemoteLogMetadataCacheWithUnreferencedSegments() throws Exception {
    TopicIdPartition partition = new TopicIdPartition(Uuid.randomUuid(), new TopicPartition("test", 0));
    int brokerId = 0;
    Path path = TestUtils.tempDirectory().toPath();
    // Create file based metadata cache.
    FileBasedRemoteLogMetadataCache cache = new FileBasedRemoteLogMetadataCache(partition, path);
    // Add a segment with start offset as 0 for leader epoch 0.
    RemoteLogSegmentId segmentId1 = new RemoteLogSegmentId(partition, Uuid.randomUuid());
    RemoteLogSegmentMetadata metadata1 = new RemoteLogSegmentMetadata(segmentId1, 0, 100, System.currentTimeMillis(), brokerId, System.currentTimeMillis(), 1024 * 1024, Collections.singletonMap(0, 0L));
    cache.addCopyInProgressSegment(metadata1);
    RemoteLogSegmentMetadataUpdate metadataUpdate1 = new RemoteLogSegmentMetadataUpdate(segmentId1, System.currentTimeMillis(), RemoteLogSegmentState.COPY_SEGMENT_FINISHED, brokerId);
    cache.updateRemoteLogSegmentMetadata(metadataUpdate1);
    Optional<RemoteLogSegmentMetadata> receivedMetadata = cache.remoteLogSegmentMetadata(0, 0L);
    assertTrue(receivedMetadata.isPresent());
    assertEquals(metadata1.createWithUpdates(metadataUpdate1), receivedMetadata.get());
    // Add a new segment with start offset as 0 for leader epoch 0, which should replace the earlier segment.
    RemoteLogSegmentId segmentId2 = new RemoteLogSegmentId(partition, Uuid.randomUuid());
    RemoteLogSegmentMetadata metadata2 = new RemoteLogSegmentMetadata(segmentId2, 0, 900, System.currentTimeMillis(), brokerId, System.currentTimeMillis(), 1024 * 1024, Collections.singletonMap(0, 0L));
    cache.addCopyInProgressSegment(metadata2);
    RemoteLogSegmentMetadataUpdate metadataUpdate2 = new RemoteLogSegmentMetadataUpdate(segmentId2, System.currentTimeMillis(), RemoteLogSegmentState.COPY_SEGMENT_FINISHED, brokerId);
    cache.updateRemoteLogSegmentMetadata(metadataUpdate2);
    // Fetch segment for leader epoch:0 and start offset:0, it should be the newly added segment.
    Optional<RemoteLogSegmentMetadata> receivedMetadata2 = cache.remoteLogSegmentMetadata(0, 0L);
    assertTrue(receivedMetadata2.isPresent());
    assertEquals(metadata2.createWithUpdates(metadataUpdate2), receivedMetadata2.get());
    // Flush the cache to the file.
    cache.flushToFile(0, 0L);
    // Create a new cache with loading from the stored path.
    FileBasedRemoteLogMetadataCache loadedCache = new FileBasedRemoteLogMetadataCache(partition, path);
    // Fetch segment for leader epoch:0 and start offset:0, it should be metadata2.
    // This ensures that the ordering of metadata is preserved after loading from the stored snapshots.
    Optional<RemoteLogSegmentMetadata> receivedMetadataAfterLoad = loadedCache.remoteLogSegmentMetadata(0, 0L);
    assertTrue(receivedMetadataAfterLoad.isPresent());
    assertEquals(metadata2.createWithUpdates(metadataUpdate2), receivedMetadataAfterLoad.get());
}
Also used: Path(java.nio.file.Path) RemoteLogSegmentMetadataUpdate(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate) TopicPartition(org.apache.kafka.common.TopicPartition) TopicIdPartition(org.apache.kafka.common.TopicIdPartition) RemoteLogSegmentId(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId) RemoteLogSegmentMetadata(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata) Test(org.junit.jupiter.api.Test)
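The replacement behavior exercised above, where a newer segment with the same start offset supersedes the older one on lookup, comes from the cache indexing segments by start offset within a leader epoch. A minimal, self-contained sketch of that lookup, using a hypothetical SimpleEpochCache rather than Kafka's actual classes:

```java
import java.util.NavigableMap;
import java.util.Optional;
import java.util.TreeMap;

// Hypothetical stand-in for the per-epoch (startOffset -> segmentId) index
// kept by the metadata cache; not the actual Kafka implementation.
class SimpleEpochCache {
    private final NavigableMap<Long, String> offsetToSegment = new TreeMap<>();

    // Adding a segment with an existing start offset replaces the old entry,
    // leaving the earlier segment unreferenced.
    void addSegment(long startOffset, String segmentId) {
        offsetToSegment.put(startOffset, segmentId);
    }

    // Look up the segment covering the given offset: the entry with the
    // greatest start offset <= offset, if any.
    Optional<String> segmentFor(long offset) {
        return Optional.ofNullable(offsetToSegment.floorEntry(offset))
                       .map(e -> e.getValue());
    }
}
```

A floor lookup on the TreeMap picks the segment whose start offset is the greatest one not exceeding the requested offset, so re-adding start offset 0 makes the new segment the one returned, as asserted in the test.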

Example 7 with RemoteLogSegmentMetadata

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata in project kafka by apache.

From class RemoteLogMetadataCacheTest, method createSegmentUpdateWithState:

private RemoteLogSegmentMetadata createSegmentUpdateWithState(RemoteLogMetadataCache cache, Map<Integer, Long> segmentLeaderEpochs, long startOffset, long endOffset, RemoteLogSegmentState state) throws RemoteResourceNotFoundException {
    RemoteLogSegmentId segmentId = new RemoteLogSegmentId(TP0, Uuid.randomUuid());
    RemoteLogSegmentMetadata segmentMetadata = new RemoteLogSegmentMetadata(segmentId, startOffset, endOffset, -1L, BROKER_ID_0, time.milliseconds(), SEG_SIZE, segmentLeaderEpochs);
    cache.addCopyInProgressSegment(segmentMetadata);
    RemoteLogSegmentMetadataUpdate segMetadataUpdate = new RemoteLogSegmentMetadataUpdate(segmentId, time.milliseconds(), state, BROKER_ID_1);
    cache.updateRemoteLogSegmentMetadata(segMetadataUpdate);
    return segmentMetadata.createWithUpdates(segMetadataUpdate);
}
Also used: RemoteLogSegmentMetadataUpdate(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate) RemoteLogSegmentId(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId) RemoteLogSegmentMetadata(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata)

Example 8 with RemoteLogSegmentMetadata

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata in project kafka by apache.

From class RemoteLogMetadataCacheTest, method testAPIsWithInvalidArgs:

@Test
public void testAPIsWithInvalidArgs() {
    RemoteLogMetadataCache cache = new RemoteLogMetadataCache();
    Assertions.assertThrows(NullPointerException.class, () -> cache.addCopyInProgressSegment(null));
    Assertions.assertThrows(NullPointerException.class, () -> cache.updateRemoteLogSegmentMetadata(null));
    // Check for invalid state updates to addCopyInProgressSegment method.
    for (RemoteLogSegmentState state : RemoteLogSegmentState.values()) {
        if (state != RemoteLogSegmentState.COPY_SEGMENT_STARTED) {
            RemoteLogSegmentMetadata segmentMetadata = new RemoteLogSegmentMetadata(new RemoteLogSegmentId(TP0, Uuid.randomUuid()), 0, 100L, -1L, BROKER_ID_0, time.milliseconds(), SEG_SIZE, Collections.singletonMap(0, 0L));
            RemoteLogSegmentMetadata updatedMetadata = segmentMetadata.createWithUpdates(new RemoteLogSegmentMetadataUpdate(segmentMetadata.remoteLogSegmentId(), time.milliseconds(), state, BROKER_ID_1));
            Assertions.assertThrows(IllegalArgumentException.class, () -> cache.addCopyInProgressSegment(updatedMetadata));
        }
    }
    // Check for updating a non-existent segment id.
    Assertions.assertThrows(RemoteResourceNotFoundException.class, () -> {
        RemoteLogSegmentId nonExistingId = new RemoteLogSegmentId(TP0, Uuid.randomUuid());
        cache.updateRemoteLogSegmentMetadata(new RemoteLogSegmentMetadataUpdate(nonExistingId, time.milliseconds(), RemoteLogSegmentState.DELETE_SEGMENT_STARTED, BROKER_ID_1));
    });
    // Check for invalid state transition.
    Assertions.assertThrows(IllegalStateException.class, () -> {
        RemoteLogSegmentMetadata segmentMetadata = createSegmentUpdateWithState(cache, Collections.singletonMap(0, 0L), 0, 100, RemoteLogSegmentState.COPY_SEGMENT_FINISHED);
        cache.updateRemoteLogSegmentMetadata(new RemoteLogSegmentMetadataUpdate(segmentMetadata.remoteLogSegmentId(), time.milliseconds(), RemoteLogSegmentState.DELETE_SEGMENT_FINISHED, BROKER_ID_1));
    });
}
Also used: RemoteLogSegmentMetadataUpdate(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate) RemoteLogSegmentState(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentState) RemoteLogSegmentId(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId) RemoteLogSegmentMetadata(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata) Test(org.junit.jupiter.api.Test)
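The invalid-transition assertions above follow the remote segment lifecycle enforced by the cache: a segment is created in COPY_SEGMENT_STARTED, may advance to COPY_SEGMENT_FINISHED, then DELETE_SEGMENT_STARTED, then DELETE_SEGMENT_FINISHED, so jumping from COPY_SEGMENT_FINISHED straight to DELETE_SEGMENT_FINISHED is rejected. A self-contained sketch of that rule, using a hypothetical SegmentLifecycle validator rather than Kafka's RemoteLogSegmentState:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Hypothetical mirror of the lifecycle checks exercised in the test above;
// the real rules live in Kafka's RemoteLogSegmentState and the cache itself.
class SegmentLifecycle {
    enum State { COPY_SEGMENT_STARTED, COPY_SEGMENT_FINISHED,
                 DELETE_SEGMENT_STARTED, DELETE_SEGMENT_FINISHED }

    // For each target state, the set of states it may be reached from.
    private static final Map<State, Set<State>> VALID_PREVIOUS = new EnumMap<>(State.class);
    static {
        VALID_PREVIOUS.put(State.COPY_SEGMENT_FINISHED,
                EnumSet.of(State.COPY_SEGMENT_STARTED));
        VALID_PREVIOUS.put(State.DELETE_SEGMENT_STARTED,
                EnumSet.of(State.COPY_SEGMENT_STARTED, State.COPY_SEGMENT_FINISHED));
        VALID_PREVIOUS.put(State.DELETE_SEGMENT_FINISHED,
                EnumSet.of(State.DELETE_SEGMENT_STARTED));
    }

    static boolean isValidTransition(State from, State to) {
        Set<State> allowed = VALID_PREVIOUS.get(to);
        return allowed != null && allowed.contains(from);
    }
}
```

Under this rule the test's failing update (COPY_SEGMENT_FINISHED to DELETE_SEGMENT_FINISHED) is invalid because DELETE_SEGMENT_FINISHED may only follow DELETE_SEGMENT_STARTED.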

Example 9 with RemoteLogSegmentMetadata

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata in project kafka by apache.

From class RemoteLogMetadataSerdeTest, method createRemoteLogSegmentMetadata:

private RemoteLogSegmentMetadata createRemoteLogSegmentMetadata() {
    Map<Integer, Long> segLeaderEpochs = new HashMap<>();
    segLeaderEpochs.put(0, 0L);
    segLeaderEpochs.put(1, 20L);
    segLeaderEpochs.put(2, 80L);
    RemoteLogSegmentId remoteLogSegmentId = new RemoteLogSegmentId(TP0, Uuid.randomUuid());
    return new RemoteLogSegmentMetadata(remoteLogSegmentId, 0L, 100L, -1L, 1, time.milliseconds(), 1024, segLeaderEpochs);
}
Also used: HashMap(java.util.HashMap) RemoteLogSegmentId(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId) RemoteLogSegmentMetadata(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata)

Example 10 with RemoteLogSegmentMetadata

Use of org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata in project kafka by apache.

From class RemoteLogMetadataSerdeTest, method testRemoteLogSegmentMetadataSerde:

@Test
public void testRemoteLogSegmentMetadataSerde() {
    RemoteLogSegmentMetadata remoteLogSegmentMetadata = createRemoteLogSegmentMetadata();
    doTestRemoteLogMetadataSerde(remoteLogSegmentMetadata);
}
Also used: RemoteLogSegmentMetadata(org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata) Test(org.junit.jupiter.api.Test)

Aggregations

RemoteLogSegmentMetadata (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadata): 20
RemoteLogSegmentId (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentId): 14
RemoteLogSegmentMetadataUpdate (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentMetadataUpdate): 8
HashMap (java.util.HashMap): 6
Test (org.junit.jupiter.api.Test): 6
ParameterizedTest (org.junit.jupiter.params.ParameterizedTest): 6
MethodSource (org.junit.jupiter.params.provider.MethodSource): 6
Map (java.util.Map): 3
TopicIdPartition (org.apache.kafka.common.TopicIdPartition): 3
TopicPartition (org.apache.kafka.common.TopicPartition): 3
RemoteLogSegmentState (org.apache.kafka.server.log.remote.storage.RemoteLogSegmentState): 3
RemoteResourceNotFoundException (org.apache.kafka.server.log.remote.storage.RemoteResourceNotFoundException): 3
Path (java.nio.file.Path): 2
ArrayList (java.util.ArrayList): 2
NavigableMap (java.util.NavigableMap): 2
ConcurrentHashMap (java.util.concurrent.ConcurrentHashMap): 2
ConcurrentMap (java.util.concurrent.ConcurrentMap): 2
Seq (scala.collection.Seq): 2
File (java.io.File): 1
Collections (java.util.Collections): 1