
Example 1 with DurableLogFactory

Use of io.pravega.segmentstore.server.logs.DurableLogFactory in project pravega by pravega.

From the class ServiceBuilder, method createOperationLogFactory:

protected OperationLogFactory createOperationLogFactory() {
    DurableDataLogFactory dataLogFactory = getSingleton(this.dataLogFactory, this.dataLogFactoryCreator);
    DurableLogConfig durableLogConfig = this.serviceBuilderConfig.getConfig(DurableLogConfig::builder);
    return new DurableLogFactory(durableLogConfig, dataLogFactory, this.coreExecutor);
}
Also used: DurableLogFactory (io.pravega.segmentstore.server.logs.DurableLogFactory), DurableLogConfig (io.pravega.segmentstore.server.logs.DurableLogConfig), InMemoryDurableDataLogFactory (io.pravega.segmentstore.storage.mocks.InMemoryDurableDataLogFactory), DurableDataLogFactory (io.pravega.segmentstore.storage.DurableDataLogFactory)
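
The same factory chain can be assembled by hand, e.g. in a test harness. The sketch below is not from the Pravega sources: the executor-only InMemoryDurableDataLogFactory constructor and the createDurableLog(metadata, readIndex) call are assumptions, so check the signatures in your Pravega version. Imports are as in the "Also used" list, plus java.util.concurrent.Executors and ScheduledExecutorService.

// Hypothetical wiring sketch (assumptions noted above).
ScheduledExecutorService executor = Executors.newScheduledThreadPool(4);
DurableDataLogFactory dataLogFactory = new InMemoryDurableDataLogFactory(executor); // assumed ctor
DurableLogConfig config = DurableLogConfig.builder()
        .with(DurableLogConfig.CHECKPOINT_MIN_COMMIT_COUNT, 10)
        .build();
OperationLogFactory logFactory = new DurableLogFactory(config, dataLogFactory, executor);
// logFactory.createDurableLog(containerMetadata, readIndex) would then produce the
// OperationLog that a segment container runs on (method name assumed).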

Example 2 with DurableLogFactory

Use of io.pravega.segmentstore.server.logs.DurableLogFactory in project pravega by pravega.

From the class StreamSegmentContainerTests, method testMetadataCleanup:

/**
 * Tests the ability to clean up SegmentMetadata for those segments which have not been used recently.
 * This test does the following:
 * 1. Sets up a custom SegmentContainer with a hook into the metadataCleanup task
 * 2. Creates a segment and appends something to it, each time updating attributes (and verifies they were updated correctly).
 * 3. Waits for the segment to be forgotten (evicted).
 * 4. Requests info on the segment, validates it, then makes another append, seals it, at each step verifying it was done
 * correctly (checking Metadata, Attributes and Storage).
 * 5. Deletes the segment, waits for metadata to be cleared (via forcing another log truncation), re-creates the
 * same segment and validates that the old attributes did not "bleed in".
 */
@Test
public void testMetadataCleanup() throws Exception {
    final String segmentName = "segment";
    final UUID[] attributes = new UUID[] { Attributes.CREATION_TIME, UUID.randomUUID(), UUID.randomUUID(), UUID.randomUUID() };
    final byte[] appendData = "hello".getBytes();
    final Map<UUID, Long> expectedAttributes = new HashMap<>();
    // We need a special DL config so that we can force truncations after every operation - this will speed up metadata
    // eviction eligibility.
    final DurableLogConfig durableLogConfig = DurableLogConfig.builder()
            .with(DurableLogConfig.CHECKPOINT_MIN_COMMIT_COUNT, 1)
            .with(DurableLogConfig.CHECKPOINT_COMMIT_COUNT, 5)
            .with(DurableLogConfig.CHECKPOINT_TOTAL_COMMIT_LENGTH, 10 * 1024 * 1024L)
            .build();
    final TestContainerConfig containerConfig = new TestContainerConfig();
    containerConfig.setSegmentMetadataExpiration(Duration.ofMillis(250));
    @Cleanup TestContext context = new TestContext(containerConfig);
    OperationLogFactory localDurableLogFactory = new DurableLogFactory(durableLogConfig, context.dataLogFactory, executorService());
    @Cleanup MetadataCleanupContainer localContainer = new MetadataCleanupContainer(CONTAINER_ID, containerConfig,
            localDurableLogFactory, context.readIndexFactory, context.writerFactory, context.storageFactory,
            executorService());
    localContainer.startAsync().awaitRunning();
    // Create segment with initial attributes and verify they were set correctly.
    val initialAttributes = createAttributeUpdates(attributes);
    applyAttributes(initialAttributes, expectedAttributes);
    localContainer.createStreamSegment(segmentName, initialAttributes, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    SegmentProperties sp = localContainer.getStreamSegmentInfo(segmentName, true, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    SegmentMetadataComparer.assertSameAttributes("Unexpected attributes after segment creation.", expectedAttributes, sp);
    // Add one append with some attribute changes and verify they were set correctly.
    val appendAttributes = createAttributeUpdates(attributes);
    applyAttributes(appendAttributes, expectedAttributes);
    localContainer.append(segmentName, appendData, appendAttributes, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    sp = localContainer.getStreamSegmentInfo(segmentName, true, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    SegmentMetadataComparer.assertSameAttributes("Unexpected attributes after append.", expectedAttributes, sp);
    // Wait until the segment is forgotten.
    localContainer.triggerMetadataCleanup(Collections.singleton(segmentName)).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    // Now get attributes again and verify them.
    sp = localContainer.getStreamSegmentInfo(segmentName, true, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    SegmentMetadataComparer.assertSameAttributes("Unexpected attributes after eviction & resurrection.", expectedAttributes, sp);
    // Append again, and make sure we can append at the right offset.
    val secondAppendAttributes = createAttributeUpdates(attributes);
    applyAttributes(secondAppendAttributes, expectedAttributes);
    localContainer.append(segmentName, appendData.length, appendData, secondAppendAttributes, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    sp = localContainer.getStreamSegmentInfo(segmentName, true, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    Assert.assertEquals("Unexpected length from segment after eviction & resurrection.", 2 * appendData.length, sp.getLength());
    SegmentMetadataComparer.assertSameAttributes("Unexpected attributes after eviction & resurrection.", expectedAttributes, sp);
    // Seal (this should clear out non-dynamic attributes).
    expectedAttributes.keySet().removeIf(Attributes::isDynamic);
    localContainer.sealStreamSegment(segmentName, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    sp = localContainer.getStreamSegmentInfo(segmentName, true, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    SegmentMetadataComparer.assertSameAttributes("Unexpected attributes after seal.", expectedAttributes, sp);
    // Verify the segment actually made it to Storage in one piece.
    waitForSegmentInStorage(sp, context).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    val storageInfo = context.storage.getStreamSegmentInfo(segmentName, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    Assert.assertEquals("Unexpected length in storage for segment.", sp.getLength(), storageInfo.getLength());
    // Delete the segment and wait until its metadata is forgotten again (we force another metadata
    // truncation via the triggerMetadataCleanup call below to make it eligible for eviction).
    localContainer.deleteStreamSegment(segmentName, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    // Wait for the segment to be forgotten again.
    localContainer.triggerMetadataCleanup(Collections.singleton(segmentName)).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    // Now Create the Segment again and verify the old attributes were not "remembered".
    val newAttributes = createAttributeUpdates(attributes);
    applyAttributes(newAttributes, expectedAttributes);
    localContainer.createStreamSegment(segmentName, newAttributes, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    sp = localContainer.getStreamSegmentInfo(segmentName, true, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    SegmentMetadataComparer.assertSameAttributes("Unexpected attributes after deletion and re-creation.", expectedAttributes, sp);
}
Also used: lombok.val (lombok.val), ConcurrentHashMap (java.util.concurrent.ConcurrentHashMap), HashMap (java.util.HashMap), Attributes (io.pravega.segmentstore.contracts.Attributes), Cleanup (lombok.Cleanup), OperationLogFactory (io.pravega.segmentstore.server.OperationLogFactory), DurableLogFactory (io.pravega.segmentstore.server.logs.DurableLogFactory), DurableLogConfig (io.pravega.segmentstore.server.logs.DurableLogConfig), AtomicLong (java.util.concurrent.atomic.AtomicLong), SegmentProperties (io.pravega.segmentstore.contracts.SegmentProperties), UUID (java.util.UUID), Test (org.junit.Test)
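
createAttributeUpdates and applyAttributes are private helpers of the test class that this page does not show. The sketches below are hypothetical reconstructions, assuming the AttributeUpdate / AttributeUpdateType API from io.pravega.segmentstore.contracts: one Replace-type update per attribute id, mirrored into the map of expected values that the assertions compare against.

// Hypothetical reconstructions of the test's local helpers (not shown on this page).
private Collection<AttributeUpdate> createAttributeUpdates(UUID[] attributeIds) {
    // One Replace-style update per attribute id, each with a fresh, distinct value.
    return Arrays.stream(attributeIds)
                 .map(id -> new AttributeUpdate(id, AttributeUpdateType.Replace, System.nanoTime()))
                 .collect(Collectors.toList());
}

private void applyAttributes(Collection<AttributeUpdate> updates, Map<UUID, Long> target) {
    // Mirror each update into the expected-values map.
    updates.forEach(update -> target.put(update.getAttributeId(), update.getValue()));
}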

Example 3 with DurableLogFactory

Use of io.pravega.segmentstore.server.logs.DurableLogFactory in project pravega by pravega.

From the class StreamSegmentContainerTests, method testStartOffline:

/**
 * Tests the ability of the StreamSegmentContainer to start in Offline mode (due to an offline DurableLog) and to
 * become online once the DurableLog does.
 */
@Test
public void testStartOffline() throws Exception {
    @Cleanup val context = new TestContext();
    AtomicReference<DurableDataLog> dataLog = new AtomicReference<>();
    @Cleanup val dataLogFactory = new TestDurableDataLogFactory(context.dataLogFactory, dataLog::set);
    AtomicReference<OperationLog> durableLog = new AtomicReference<>();
    val durableLogFactory = new WatchableOperationLogFactory(
            new DurableLogFactory(DEFAULT_DURABLE_LOG_CONFIG, dataLogFactory, executorService()), durableLog::set);
    val containerFactory = new StreamSegmentContainerFactory(DEFAULT_CONFIG, durableLogFactory,
            context.readIndexFactory, context.writerFactory, context.storageFactory, executorService());
    // Write some data
    ArrayList<String> segmentNames = new ArrayList<>();
    HashMap<String, Long> lengths = new HashMap<>();
    HashMap<String, ByteArrayOutputStream> segmentContents = new HashMap<>();
    try (val container = containerFactory.createStreamSegmentContainer(CONTAINER_ID)) {
        container.startAsync().awaitRunning();
        ArrayList<CompletableFuture<Void>> opFutures = new ArrayList<>();
        for (int i = 0; i < SEGMENT_COUNT; i++) {
            String segmentName = getSegmentName(i);
            segmentNames.add(segmentName);
            opFutures.add(container.createStreamSegment(segmentName, null, TIMEOUT));
        }
        for (int i = 0; i < APPENDS_PER_SEGMENT / 2; i++) {
            for (String segmentName : segmentNames) {
                byte[] appendData = getAppendData(segmentName, i);
                opFutures.add(container.append(segmentName, appendData, null, TIMEOUT));
                lengths.put(segmentName, lengths.getOrDefault(segmentName, 0L) + appendData.length);
                recordAppend(segmentName, appendData, segmentContents);
            }
        }
        Futures.allOf(opFutures).join();
        // Disable the DurableDataLog.
        dataLog.get().disable();
    }
    // Start in "Offline" mode, verify operations cannot execute and then shut down - make sure we can shut down an offline container.
    try (val container = containerFactory.createStreamSegmentContainer(CONTAINER_ID)) {
        container.startAsync().awaitRunning();
        Assert.assertTrue("Expecting Segment Container to be offline.", container.isOffline());
        AssertExtensions.assertThrows("append() worked in offline mode.", () -> container.append("foo", new byte[1], null, TIMEOUT), ex -> ex instanceof ContainerOfflineException);
        AssertExtensions.assertThrows("getStreamSegmentInfo() worked in offline mode.", () -> container.getStreamSegmentInfo("foo", false, TIMEOUT), ex -> ex instanceof ContainerOfflineException);
        AssertExtensions.assertThrows("read() worked in offline mode.", () -> container.read("foo", 0, 1, TIMEOUT), ex -> ex instanceof ContainerOfflineException);
        container.stopAsync().awaitTerminated();
    }
    // Start in "Offline" mode and verify we can resume a normal startup.
    try (val container = containerFactory.createStreamSegmentContainer(CONTAINER_ID)) {
        container.startAsync().awaitRunning();
        Assert.assertTrue("Expecting Segment Container to be offline.", container.isOffline());
        dataLog.get().enable();
        // Wait for the DurableLog to become online.
        durableLog.get().awaitOnline().get(DEFAULT_DURABLE_LOG_CONFIG.getStartRetryDelay().toMillis() * 100, TimeUnit.MILLISECONDS);
        // Verify we can execute regular operations now.
        ArrayList<CompletableFuture<Void>> opFutures = new ArrayList<>();
        for (int i = 0; i < APPENDS_PER_SEGMENT / 2; i++) {
            for (String segmentName : segmentNames) {
                byte[] appendData = getAppendData(segmentName, i);
                opFutures.add(container.append(segmentName, appendData, null, TIMEOUT));
                lengths.put(segmentName, lengths.getOrDefault(segmentName, 0L) + appendData.length);
                recordAppend(segmentName, appendData, segmentContents);
            }
        }
        Futures.allOf(opFutures).join();
        // Verify all operations arrived in Storage.
        ArrayList<CompletableFuture<Void>> segmentsCompletion = new ArrayList<>();
        for (String segmentName : segmentNames) {
            SegmentProperties sp = container.getStreamSegmentInfo(segmentName, false, TIMEOUT).join();
            segmentsCompletion.add(waitForSegmentInStorage(sp, context));
        }
        Futures.allOf(segmentsCompletion).join();
        container.stopAsync().awaitTerminated();
    }
}
Also used: ConcurrentHashMap (java.util.concurrent.ConcurrentHashMap), HashMap (java.util.HashMap), ContainerOfflineException (io.pravega.segmentstore.server.ContainerOfflineException), ArrayList (java.util.ArrayList), OperationLog (io.pravega.segmentstore.server.OperationLog), Cleanup (lombok.Cleanup), DurableLogFactory (io.pravega.segmentstore.server.logs.DurableLogFactory), CompletableFuture (java.util.concurrent.CompletableFuture), lombok.val (lombok.val), DurableDataLog (io.pravega.segmentstore.storage.DurableDataLog), AtomicReference (java.util.concurrent.atomic.AtomicReference), ByteArrayOutputStream (java.io.ByteArrayOutputStream), AtomicLong (java.util.concurrent.atomic.AtomicLong), TestDurableDataLogFactory (io.pravega.segmentstore.server.TestDurableDataLogFactory), SegmentProperties (io.pravega.segmentstore.contracts.SegmentProperties), Test (org.junit.Test)
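
For a caller, the takeaway is that an offline container rejects every operation with ContainerOfflineException until its DurableLog is re-enabled and comes back online. Below is a hypothetical retry helper restricted to the container methods the test exercises; Futures.delayedFuture from io.pravega.common.concurrent is an assumption, and production code would bound the number of retries.

// Hypothetical helper: retry an append for as long as the container reports offline.
static CompletableFuture<Void> appendWithOfflineRetry(SegmentContainer container, String segmentName,
                                                      byte[] data, ScheduledExecutorService executor) {
    if (!container.isOffline()) {
        return container.append(segmentName, data, null, TIMEOUT);
    }
    // Offline: back off briefly and re-check; the DurableLog may come back online in the meantime.
    return Futures.delayedFuture(Duration.ofMillis(100), executor)
                  .thenCompose(v -> appendWithOfflineRetry(container, segmentName, data, executor));
}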

Example 4 with DurableLogFactory

Use of io.pravega.segmentstore.server.logs.DurableLogFactory in project pravega by pravega.

From the class StreamSegmentContainerTests, method testMetadataCleanupRecovery:

/**
 * Tests the ability of the SegmentContainer to recover in the following scenario:
 * 1. A segment is created and recorded in the metadata (with some optional operations on it)
 * 2. The segment is evicted from the metadata.
 * 3. The segment is reactivated (with a new metadata mapping). No truncations happened since #2 above.
 * 4. Container shuts down and needs to recover. We need to ensure that recovery succeeds even with the new mapping
 * of the segment.
 */
@Test
public void testMetadataCleanupRecovery() throws Exception {
    final String segmentName = "segment";
    final byte[] appendData = "hello".getBytes();
    final TestContainerConfig containerConfig = new TestContainerConfig();
    containerConfig.setSegmentMetadataExpiration(Duration.ofMillis(250));
    @Cleanup TestContext context = new TestContext(containerConfig);
    val localDurableLogFactory = new DurableLogFactory(DEFAULT_DURABLE_LOG_CONFIG, context.dataLogFactory, executorService());
    SegmentProperties originalInfo;
    try (val container1 = new MetadataCleanupContainer(CONTAINER_ID, containerConfig, localDurableLogFactory,
            context.readIndexFactory, context.writerFactory, context.storageFactory, executorService())) {
        container1.startAsync().awaitRunning();
        // Create segment and make one append to it.
        container1.createStreamSegment(segmentName, null, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
        container1.append(segmentName, appendData, null, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
        // Wait until the segment is forgotten.
        container1.triggerMetadataCleanup(Collections.singleton(segmentName)).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
        // Add an append - this will force the segment to be reactivated, thus be registered with a different id.
        container1.append(segmentName, appendData, null, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
        originalInfo = container1.getStreamSegmentInfo(segmentName, true, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
        container1.stopAsync().awaitTerminated();
    }
    // Restart container and verify it started successfully.
    @Cleanup val container2 = new MetadataCleanupContainer(CONTAINER_ID, containerConfig, localDurableLogFactory,
            context.readIndexFactory, context.writerFactory, context.storageFactory, executorService());
    container2.startAsync().awaitRunning();
    val recoveredInfo = container2.getStreamSegmentInfo(segmentName, false, TIMEOUT).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    Assert.assertEquals("Unexpected length from recovered segment.", originalInfo.getLength(), recoveredInfo.getLength());
    container2.stopAsync().awaitTerminated();
}
Also used: lombok.val (lombok.val), DurableLogFactory (io.pravega.segmentstore.server.logs.DurableLogFactory), SegmentProperties (io.pravega.segmentstore.contracts.SegmentProperties), Cleanup (lombok.Cleanup), Test (org.junit.Test)

Example 5 with DurableLogFactory

Use of io.pravega.segmentstore.server.logs.DurableLogFactory in project pravega by pravega.

From the class StreamSegmentContainerTests, method testForcedMetadataCleanup:

/**
 * Tests the case when the ContainerMetadata has filled up to capacity with segments and we cannot map any more segments.
 */
@Test
public void testForcedMetadataCleanup() throws Exception {
    final int maxSegmentCount = 3;
    final ContainerConfig containerConfig = ContainerConfig.builder()
            .with(ContainerConfig.SEGMENT_METADATA_EXPIRATION_SECONDS,
                    (int) DEFAULT_CONFIG.getSegmentMetadataExpiration().getSeconds())
            .with(ContainerConfig.MAX_ACTIVE_SEGMENT_COUNT, maxSegmentCount)
            .build();
    // We need a special DL config so that we can force truncations after every operation - this will speed up metadata
    // eviction eligibility.
    final DurableLogConfig durableLogConfig = DurableLogConfig.builder()
            .with(DurableLogConfig.CHECKPOINT_MIN_COMMIT_COUNT, 1)
            .with(DurableLogConfig.CHECKPOINT_COMMIT_COUNT, 5)
            .with(DurableLogConfig.CHECKPOINT_TOTAL_COMMIT_LENGTH, 10L * 1024 * 1024)
            .build();
    @Cleanup TestContext context = new TestContext(containerConfig);
    OperationLogFactory localDurableLogFactory = new DurableLogFactory(durableLogConfig, context.dataLogFactory, executorService());
    @Cleanup MetadataCleanupContainer localContainer = new MetadataCleanupContainer(CONTAINER_ID, containerConfig,
            localDurableLogFactory, context.readIndexFactory, context.writerFactory, context.storageFactory,
            executorService());
    localContainer.startAsync().awaitRunning();
    // Create 4 segments and one transaction.
    String segment0 = getSegmentName(0);
    localContainer.createStreamSegment(segment0, null, TIMEOUT).join();
    String segment1 = getSegmentName(1);
    localContainer.createStreamSegment(segment1, null, TIMEOUT).join();
    String segment2 = getSegmentName(2);
    localContainer.createStreamSegment(segment2, null, TIMEOUT).join();
    String segment3 = getSegmentName(3);
    localContainer.createStreamSegment(segment3, null, TIMEOUT).join();
    String txn1 = localContainer.createTransaction(segment3, UUID.randomUUID(), null, TIMEOUT).join();
    // Activate one segment.
    activateSegment(segment2, localContainer).join();
    // Activate the transaction; this should fill up the metadata (itself + parent).
    activateSegment(txn1, localContainer).join();
    // Verify the transaction's parent has been activated.
    Assert.assertNotNull("Transaction's parent has not been activated.", localContainer.getStreamSegmentInfo(segment3, false, TIMEOUT).join());
    // At this point, the active segments should be: 2, 3 and Txn.
    // Verify we cannot activate any other segment.
    AssertExtensions.assertThrows("getSegmentId() allowed mapping more segments than the metadata can support.", () -> activateSegment(segment1, localContainer), ex -> ex instanceof TooManyActiveSegmentsException);
    AssertExtensions.assertThrows("getSegmentId() allowed mapping more segments than the metadata can support.", () -> activateSegment(segment0, localContainer), ex -> ex instanceof TooManyActiveSegmentsException);
    // Test the ability to forcefully evict items from the metadata when there is pressure and we need to register something new.
    // Case 1: following a Segment deletion.
    localContainer.deleteStreamSegment(segment2, TIMEOUT).join();
    val segment1Activation = tryActivate(localContainer, segment1, segment3);
    val segment1Info = segment1Activation.get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    Assert.assertNotNull("Unable to properly activate dormant segment (1).", segment1Info);
    // Case 2: following a Merge.
    localContainer.sealStreamSegment(txn1, TIMEOUT).join();
    localContainer.mergeTransaction(txn1, TIMEOUT).join();
    val segment0Activation = tryActivate(localContainer, segment0, segment3);
    val segment0Info = segment0Activation.get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    Assert.assertNotNull("Unable to properly activate dormant segment (0).", segment0Info);
    tryActivate(localContainer, segment1, segment3).get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
    // At this point the active segments should be: 0, 1 and 3.
    Assert.assertNotNull("Pre-activated segment did not stay in metadata (3).", localContainer.getStreamSegmentInfo(segment3, false, TIMEOUT).join());
    Assert.assertNotNull("Pre-activated segment did not stay in metadata (1).", localContainer.getStreamSegmentInfo(segment1, false, TIMEOUT).join());
    Assert.assertNotNull("Pre-activated segment did not stay in metadata (0).", localContainer.getStreamSegmentInfo(segment0, false, TIMEOUT).join());
    context.container.stopAsync().awaitTerminated();
}
Also used: lombok.val (lombok.val), DurableLogFactory (io.pravega.segmentstore.server.logs.DurableLogFactory), DurableLogConfig (io.pravega.segmentstore.server.logs.DurableLogConfig), TooManyActiveSegmentsException (io.pravega.segmentstore.contracts.TooManyActiveSegmentsException), Cleanup (lombok.Cleanup), OperationLogFactory (io.pravega.segmentstore.server.OperationLogFactory), Test (org.junit.Test)
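
Once the metadata reaches MAX_ACTIVE_SEGMENT_COUNT, mapping attempts fail with TooManyActiveSegmentsException until a deletion, merge, or cleanup run evicts a segment. A hypothetical caller-side pattern in plain Java, using only methods the test itself calls:

// Hypothetical retry wrapper: back off while the container metadata is full.
static SegmentProperties getInfoWithEvictionRetry(SegmentContainer container, String segmentName)
        throws Exception {
    for (int attempt = 0; attempt < 3; attempt++) {
        try {
            return container.getStreamSegmentInfo(segmentName, false, TIMEOUT)
                            .get(TIMEOUT.toMillis(), TimeUnit.MILLISECONDS);
        } catch (ExecutionException ex) {
            if (!(ex.getCause() instanceof TooManyActiveSegmentsException)) {
                throw ex;
            }
            // Metadata is full; give eviction a chance to free a slot, then retry.
            Thread.sleep(250);
        }
    }
    throw new IllegalStateException("Metadata still full after retries: " + segmentName);
}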

Aggregations

DurableLogFactory (io.pravega.segmentstore.server.logs.DurableLogFactory): 5 uses
Cleanup (lombok.Cleanup): 4 uses
lombok.val (lombok.val): 4 uses
Test (org.junit.Test): 4 uses
SegmentProperties (io.pravega.segmentstore.contracts.SegmentProperties): 3 uses
DurableLogConfig (io.pravega.segmentstore.server.logs.DurableLogConfig): 3 uses
OperationLogFactory (io.pravega.segmentstore.server.OperationLogFactory): 2 uses
HashMap (java.util.HashMap): 2 uses
ConcurrentHashMap (java.util.concurrent.ConcurrentHashMap): 2 uses
AtomicLong (java.util.concurrent.atomic.AtomicLong): 2 uses
Attributes (io.pravega.segmentstore.contracts.Attributes): 1 use
TooManyActiveSegmentsException (io.pravega.segmentstore.contracts.TooManyActiveSegmentsException): 1 use
ContainerOfflineException (io.pravega.segmentstore.server.ContainerOfflineException): 1 use
OperationLog (io.pravega.segmentstore.server.OperationLog): 1 use
TestDurableDataLogFactory (io.pravega.segmentstore.server.TestDurableDataLogFactory): 1 use
DurableDataLog (io.pravega.segmentstore.storage.DurableDataLog): 1 use
DurableDataLogFactory (io.pravega.segmentstore.storage.DurableDataLogFactory): 1 use
InMemoryDurableDataLogFactory (io.pravega.segmentstore.storage.mocks.InMemoryDurableDataLogFactory): 1 use
ByteArrayOutputStream (java.io.ByteArrayOutputStream): 1 use
ArrayList (java.util.ArrayList): 1 use