
Example 36 with SegmentMetadata

Use of io.pravega.segmentstore.server.SegmentMetadata in project pravega.

Class StreamSegmentReadIndex, method completeMerge.

/**
 * Executes Step 2 of the 2-Step Merge Process.
 * The StreamSegments have been physically merged in Storage, so the Source StreamSegment no longer exists.
 * The ReadIndex entries of the two Segments are joined together.
 *
 * @param sourceStreamSegmentId The Id of the StreamSegment that was merged into this one.
 */
void completeMerge(long sourceStreamSegmentId) {
    long traceId = LoggerHelpers.traceEnterWithContext(log, this.traceObjectId, "completeMerge", sourceStreamSegmentId);
    Exceptions.checkNotClosed(this.closed, this);
    // Find the appropriate redirect entry.
    RedirectIndexEntry redirectEntry;
    long mergeKey;
    synchronized (this.lock) {
        mergeKey = this.mergeOffsets.getOrDefault(sourceStreamSegmentId, -1L);
        Exceptions.checkArgument(mergeKey >= 0, "sourceSegmentStreamId", "Given StreamSegmentReadIndex's merger with this one has not been initiated using beginMerge. Cannot finalize the merger.");
        // Get the RedirectIndexEntry. These types of entries are sticky in the cache and DO NOT contribute to the
        // cache Stats. They are already accounted for in the other Segment's ReadIndex.
        ReadIndexEntry indexEntry = this.indexEntries.get(mergeKey);
        assert indexEntry != null && !indexEntry.isDataEntry() : String.format("mergeOffsets points to a ReadIndexEntry that does not exist or is of the wrong type. sourceStreamSegmentId = %d, offset = %d, treeEntry = %s.", sourceStreamSegmentId, mergeKey, indexEntry);
        redirectEntry = (RedirectIndexEntry) indexEntry;
    }
    StreamSegmentReadIndex sourceIndex = redirectEntry.getRedirectReadIndex();
    SegmentMetadata sourceMetadata = sourceIndex.metadata;
    Exceptions.checkArgument(sourceMetadata.isDeleted(), "sourceSegmentStreamId", "Given StreamSegmentReadIndex refers to a StreamSegment that has not been deleted yet.");
    // Get all the entries from the source index and append them here.
    List<MergedIndexEntry> sourceEntries = sourceIndex.getAllEntries(redirectEntry.getStreamSegmentOffset());
    synchronized (this.lock) {
        // Remove redirect entry (again, no need to update the Cache Stats, as this is a RedirectIndexEntry).
        this.indexEntries.remove(mergeKey);
        this.mergeOffsets.remove(sourceStreamSegmentId);
        sourceEntries.forEach(this::addToIndex);
    }
    LoggerHelpers.traceLeave(log, this.traceObjectId, "completeMerge", traceId);
}
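To make the two-step handshake concrete, here is a minimal, self-contained sketch of the same idea. All names and types below are hypothetical stand-ins written for this page, not Pravega's actual classes: beginMerge records a redirect placeholder at the merge offset, and completeMerge swaps that placeholder for the source index's entries once the storage-level merge has happened.

import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical stand-in for illustration only; not Pravega's real ReadIndex.
class MiniReadIndex {

    // offset -> index entry (here, either raw data or a redirect to a source index).
    private final TreeMap<Long, Object> indexEntries = new TreeMap<>();

    // sourceSegmentId -> offset at which the source will be merged.
    private final Map<Long, Long> mergeOffsets = new HashMap<>();

    // Step 1: register a redirect placeholder at the offset where the source will land.
    void beginMerge(long sourceSegmentId, long mergeOffset, MiniReadIndex source) {
        this.mergeOffsets.put(sourceSegmentId, mergeOffset);
        this.indexEntries.put(mergeOffset, source); // the redirect entry points at the source index.
    }

    // Step 2: after the Segments are physically merged in Storage, replace the
    // redirect with the source's own entries, re-based onto this index.
    void completeMerge(long sourceSegmentId) {
        Long mergeOffset = this.mergeOffsets.remove(sourceSegmentId);
        if (mergeOffset == null) {
            throw new IllegalArgumentException("beginMerge was not called for source " + sourceSegmentId);
        }
        MiniReadIndex source = (MiniReadIndex) this.indexEntries.remove(mergeOffset);
        source.indexEntries.forEach((offset, entry) -> this.indexEntries.put(mergeOffset + offset, entry));
    }
}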

Example 37 with SegmentMetadata

Use of io.pravega.segmentstore.server.SegmentMetadata in project pravega.

Class StreamSegmentContainerMetadataTests, method shouldExpectRemoval.

private boolean shouldExpectRemoval(long segmentId, ContainerMetadata m, Map<Long, Long> transactions, long cutoffSeqNo, long truncatedSeqNo) {
    SegmentMetadata segmentMetadata = m.getStreamSegmentMetadata(segmentId);
    if (segmentMetadata.isDeleted()) {
        // Deleted segments are immediately eligible for eviction as soon as their last op is truncated out.
        return segmentMetadata.getLastUsed() <= truncatedSeqNo;
    }
    SegmentMetadata transactionMetadata = null;
    if (transactions.containsKey(segmentId)) {
        transactionMetadata = m.getStreamSegmentMetadata(transactions.get(segmentId));
    }
    boolean expectedRemoved = segmentMetadata.getLastUsed() < cutoffSeqNo;
    if (transactionMetadata != null) {
        expectedRemoved &= transactionMetadata.getLastUsed() < cutoffSeqNo;
    }
    return expectedRemoved;
}
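As a worked example of this predicate (illustrative numbers only): a deleted segment with lastUsed = 5 becomes eligible as soon as truncatedSeqNo reaches 5, regardless of the cutoff; a live segment with lastUsed = 7 whose transaction has lastUsed = 12 is not removed at cutoffSeqNo = 10, because the conjunction requires both values to fall below the cutoff.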

Example 38 with SegmentMetadata

Use of io.pravega.segmentstore.server.SegmentMetadata in project pravega.

Class SegmentAggregatorTests, method testAddWithBadInput.

/**
 * Tests the add() method with invalid arguments.
 */
@Test
public void testAddWithBadInput() throws Exception {
    final long badTransactionId = 12345;
    final long badParentId = 56789;
    final String badParentName = "Foo_Parent";
    final String badTransactionName = "Foo_Transaction";
    @Cleanup TestContext context = new TestContext(DEFAULT_CONFIG);
    // We only need one Transaction for this test.
    SegmentAggregator transactionAggregator = context.transactionAggregators[0];
    SegmentMetadata transactionMetadata = transactionAggregator.getMetadata();
    context.segmentAggregator.initialize(TIMEOUT).join();
    transactionAggregator.initialize(TIMEOUT).join();
    // Create 2 more segments that can be used to verify MergeSegmentOperation.
    context.containerMetadata.mapStreamSegmentId(badParentName, badParentId);
    UpdateableSegmentMetadata badTransactionMetadata = context.containerMetadata.mapStreamSegmentId(badTransactionName, badTransactionId);
    badTransactionMetadata.setLength(0);
    badTransactionMetadata.setStorageLength(0);
    // 1. MergeSegmentOperation
    // Verify that MergeSegmentOperation cannot be added to the Segment to be merged.
    AssertExtensions.assertThrows("add() allowed a MergeSegmentOperation on the Transaction segment.", () -> transactionAggregator.add(generateSimpleMergeTransaction(transactionMetadata.getId(), context)), ex -> ex instanceof IllegalArgumentException);
    // 2. StreamSegmentSealOperation.
    // 2a. Verify we cannot add a StreamSegmentSealOperation if the segment is not sealed yet.
    AssertExtensions.assertThrows("add() allowed a StreamSegmentSealOperation for a non-sealed segment.", () -> {
        @Cleanup SegmentAggregator badTransactionAggregator = new SegmentAggregator(badTransactionMetadata, context.dataSource, context.storage, DEFAULT_CONFIG, context.timer, executorService());
        badTransactionAggregator.initialize(TIMEOUT).join();
        badTransactionAggregator.add(generateSimpleSeal(badTransactionId, context));
    }, ex -> ex instanceof DataCorruptionException);
    // 2b. Verify that nothing is allowed after Seal (after adding one append to the Transaction Segment and sealing it).
    StorageOperation transactionAppend1 = generateAppendAndUpdateMetadata(0, transactionMetadata.getId(), context);
    transactionAggregator.add(transactionAppend1);
    transactionAggregator.add(generateSealAndUpdateMetadata(transactionMetadata.getId(), context));
    AssertExtensions.assertThrows("add() allowed operation after seal.", () -> transactionAggregator.add(generateSimpleAppend(transactionMetadata.getId(), context)), ex -> ex instanceof DataCorruptionException);
    // 3. CachedStreamSegmentAppendOperation.
    final StorageOperation parentAppend1 = generateAppendAndUpdateMetadata(0, SEGMENT_ID, context);
    // 3a. Verify we cannot add StreamSegmentAppendOperations.
    AssertExtensions.assertThrows("add() allowed a StreamSegmentAppendOperation.", () -> {
        // Raw StreamSegmentAppendOperations (as opposed to Cached ones) are not accepted by the aggregator.
        StreamSegmentAppendOperation badAppend = new StreamSegmentAppendOperation(parentAppend1.getStreamSegmentId(), parentAppend1.getStreamSegmentOffset(), new ByteArraySegment(new byte[(int) parentAppend1.getLength()]), null);
        context.segmentAggregator.add(badAppend);
    }, ex -> ex instanceof IllegalArgumentException);
    // Add this one append to the parent (nothing unusual here); we'll use this for the next tests.
    context.segmentAggregator.add(parentAppend1);
    // 3b. Verify we cannot add anything beyond the DurableLogOffset (offset or offset+length).
    val appendData = new ByteArraySegment("foo".getBytes());
    AssertExtensions.assertThrows("add() allowed an operation beyond the DurableLogOffset (offset).", () -> {
        // We have the correct offset, but we did not increase the Length.
        StreamSegmentAppendOperation badAppend = new StreamSegmentAppendOperation(context.segmentAggregator.getMetadata().getId(), appendData, null);
        badAppend.setStreamSegmentOffset(parentAppend1.getStreamSegmentOffset() + parentAppend1.getLength());
        context.segmentAggregator.add(new CachedStreamSegmentAppendOperation(badAppend));
    }, ex -> ex instanceof DataCorruptionException);
    ((UpdateableSegmentMetadata) context.segmentAggregator.getMetadata()).setLength(parentAppend1.getStreamSegmentOffset() + parentAppend1.getLength() + 1);
    AssertExtensions.assertThrows("add() allowed an operation beyond the DurableLogOffset (offset+length).", () -> {
        // We have the correct offset, but the append exceeds the Length by 1 byte.
        StreamSegmentAppendOperation badAppend = new StreamSegmentAppendOperation(context.segmentAggregator.getMetadata().getId(), appendData, null);
        badAppend.setStreamSegmentOffset(parentAppend1.getStreamSegmentOffset() + parentAppend1.getLength());
        context.segmentAggregator.add(new CachedStreamSegmentAppendOperation(badAppend));
    }, ex -> ex instanceof DataCorruptionException);
    // 3c. Verify contiguity (offsets - we cannot have gaps in the data).
    AssertExtensions.assertThrows("add() allowed an operation with wrong offset (too small).", () -> {
        StreamSegmentAppendOperation badOffsetAppend = new StreamSegmentAppendOperation(context.segmentAggregator.getMetadata().getId(), appendData, null);
        badOffsetAppend.setStreamSegmentOffset(0);
        context.segmentAggregator.add(new CachedStreamSegmentAppendOperation(badOffsetAppend));
    }, ex -> ex instanceof DataCorruptionException);
    AssertExtensions.assertThrows("add() allowed an operation with wrong offset (too large).", () -> {
        StreamSegmentAppendOperation badOffsetAppend = new StreamSegmentAppendOperation(context.segmentAggregator.getMetadata().getId(), appendData, null);
        badOffsetAppend.setStreamSegmentOffset(parentAppend1.getStreamSegmentOffset() + parentAppend1.getLength() + 1);
        context.segmentAggregator.add(new CachedStreamSegmentAppendOperation(badOffsetAppend));
    }, ex -> ex instanceof DataCorruptionException);
    AssertExtensions.assertThrows("add() allowed an operation with wrong offset (too large, but no pending operations).", () -> {
        @Cleanup SegmentAggregator badTransactionAggregator = new SegmentAggregator(badTransactionMetadata, context.dataSource, context.storage, DEFAULT_CONFIG, context.timer, executorService());
        badTransactionMetadata.setLength(100);
        badTransactionAggregator.initialize(TIMEOUT).join();
        StreamSegmentAppendOperation badOffsetAppend = new StreamSegmentAppendOperation(context.segmentAggregator.getMetadata().getId(), appendData, null);
        badOffsetAppend.setStreamSegmentOffset(1);
        context.segmentAggregator.add(new CachedStreamSegmentAppendOperation(badOffsetAppend));
    }, ex -> ex instanceof DataCorruptionException);
    // 4. Verify Segment Id match.
    AssertExtensions.assertThrows("add() allowed an Append operation with wrong Segment Id.", () -> {
        StreamSegmentAppendOperation badIdAppend = new StreamSegmentAppendOperation(Integer.MAX_VALUE, appendData, null);
        badIdAppend.setStreamSegmentOffset(parentAppend1.getStreamSegmentOffset() + parentAppend1.getLength());
        context.segmentAggregator.add(new CachedStreamSegmentAppendOperation(badIdAppend));
    }, ex -> ex instanceof IllegalArgumentException);
    AssertExtensions.assertThrows("add() allowed a StreamSegmentSealOperation with wrong SegmentId.", () -> {
        StreamSegmentSealOperation badIdSeal = new StreamSegmentSealOperation(Integer.MAX_VALUE);
        badIdSeal.setStreamSegmentOffset(parentAppend1.getStreamSegmentOffset() + parentAppend1.getLength());
        context.segmentAggregator.add(badIdSeal);
    }, ex -> ex instanceof IllegalArgumentException);
    AssertExtensions.assertThrows("add() allowed a MergeSegmentOperation with wrong SegmentId.", () -> {
        MergeSegmentOperation badIdMerge = new MergeSegmentOperation(Integer.MAX_VALUE, transactionMetadata.getId());
        badIdMerge.setStreamSegmentOffset(parentAppend1.getStreamSegmentOffset() + parentAppend1.getLength());
        badIdMerge.setLength(1);
        context.segmentAggregator.add(badIdMerge);
    }, ex -> ex instanceof IllegalArgumentException);
    // 5. Truncations.
    AssertExtensions.assertThrows("add() allowed a StreamSegmentTruncateOperation with a truncation offset beyond the one in the metadata.", () -> {
        StreamSegmentTruncateOperation op = new StreamSegmentTruncateOperation(SEGMENT_ID, 10);
        op.setSequenceNumber(context.containerMetadata.nextOperationSequenceNumber());
        context.segmentAggregator.add(op);
    }, ex -> ex instanceof DataCorruptionException);
}
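The validation rules this test exercises can be condensed into a single routine. The sketch below is a hypothetical summary written for this page; the names, types, and the use of IllegalStateException in place of DataCorruptionException are ours, not SegmentAggregator's actual code.

// Hypothetical condensation of the add() rules exercised above; illustration only.
final class AddValidationSketch {

    static final class Meta { long id; long length; boolean sealed; }
    static final class Op { long segmentId; long offset; long length; }

    static void validateAdd(Op op, Meta meta, long lastAddedOffset) {
        // Case 4: the operation must target this aggregator's Segment.
        if (op.segmentId != meta.id) {
            throw new IllegalArgumentException("wrong Segment Id");
        }
        // Case 2b: nothing may be added after a seal.
        if (meta.sealed) {
            throw new IllegalStateException("operation after seal");
        }
        // Case 3c: appends must be contiguous with what was already added.
        if (op.offset != lastAddedOffset) {
            throw new IllegalStateException("offset gap");
        }
        // Case 3b: appends may not extend beyond the length recorded in the metadata.
        if (op.offset + op.length > meta.length) {
            throw new IllegalStateException("beyond the DurableLogOffset");
        }
    }
}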

Example 39 with SegmentMetadata

Use of io.pravega.segmentstore.server.SegmentMetadata in project pravega.

Class SegmentAggregatorTests, method testMerge.

/**
 * Tests the flush() method with Append and MergeTransactionOperations.
 * Overall strategy:
 * 1. Create one Parent Segment and N Transaction Segments.
 * 2. Populate all Transaction Segments with data.
 * 3. Seal the first N/2 Transaction Segments.
 * 4. Add some Appends to the Parent, interspersed with Merge Transaction Ops (for all Transactions).
 * 5. Call flush() repeatedly on all Segments, until nothing is flushed anymore. Verify only the first N/2 Transactions were merged.
 * 6. Seal the remaining N/2 Transaction Segments
 * 7. Call flush() repeatedly on all Segments, until nothing is flushed anymore. Verify all Transactions were merged.
 * 8. Verify the Parent Segment has all the data (from itself and its Transactions), in the correct order.
 */
@Test
@SuppressWarnings("checkstyle:CyclomaticComplexity")
public void testMerge() throws Exception {
    // This is the number of appends per Segment/Transaction - there will be a lot of appends here.
    final int appendCount = 100;
    final WriterConfig config = WriterConfig.builder()
            .with(WriterConfig.FLUSH_THRESHOLD_BYTES, appendCount * 50) // Extra high length threshold.
            .with(WriterConfig.FLUSH_THRESHOLD_MILLIS, 1000L)
            .with(WriterConfig.MAX_FLUSH_SIZE_BYTES, 10000)
            .with(WriterConfig.MIN_READ_TIMEOUT_MILLIS, 10L)
            .build();
    @Cleanup TestContext context = new TestContext(config);
    // Initialize all segments.
    context.segmentAggregator.initialize(TIMEOUT).join();
    for (SegmentAggregator a : context.transactionAggregators) {
        a.initialize(TIMEOUT).join();
    }
    // Store written data by segment - so we can check it later.
    HashMap<Long, ByteArrayOutputStream> dataBySegment = new HashMap<>();
    val actualMergeOpAck = new ArrayList<Map.Entry<Long, Long>>();
    context.dataSource.setCompleteMergeCallback((target, source) -> actualMergeOpAck.add(new AbstractMap.SimpleImmutableEntry<Long, Long>(target, source)));
    // Add a few appends to each Transaction aggregator and to the parent aggregator.
    // Seal the first half of the Transaction aggregators (thus, those Transactions will be fully flushed).
    HashSet<Long> sealedTransactionIds = new HashSet<>();
    for (int i = 0; i < context.transactionAggregators.length; i++) {
        SegmentAggregator transactionAggregator = context.transactionAggregators[i];
        long transactionId = transactionAggregator.getMetadata().getId();
        ByteArrayOutputStream writtenData = new ByteArrayOutputStream();
        dataBySegment.put(transactionId, writtenData);
        for (int appendId = 0; appendId < appendCount; appendId++) {
            StorageOperation appendOp = generateAppendAndUpdateMetadata(appendId, transactionId, context);
            transactionAggregator.add(appendOp);
            getAppendData(appendOp, writtenData, context);
        }
        if (i < context.transactionAggregators.length / 2) {
            // We only seal the first half.
            transactionAggregator.add(generateSealAndUpdateMetadata(transactionId, context));
            sealedTransactionIds.add(transactionId);
        }
    }
    // Add MergeTransactionOperations to the parent aggregator, making sure we have both the following cases:
    // * Two or more consecutive MergeTransactionOperations both for Transactions that are sealed and for those that are not.
    // * MergeTransactionOperations with appends interspersed between them (in the parent), both for sealed Transactions and non-sealed Transactions.
    long parentSegmentId = context.segmentAggregator.getMetadata().getId();
    @Cleanup ByteArrayOutputStream parentData = new ByteArrayOutputStream();
    for (int transIndex = 0; transIndex < context.transactionAggregators.length; transIndex++) {
        // This helps ensure that we have both interspersed appends, and consecutive MergeTransactionOperations in the parent.
        if (transIndex % 2 == 1) {
            StorageOperation appendOp = generateAppendAndUpdateMetadata(transIndex, parentSegmentId, context);
            context.segmentAggregator.add(appendOp);
            getAppendData(appendOp, parentData, context);
        }
        // Merge this Transaction into the parent & record its data in the final parent data array.
        long transactionId = context.transactionAggregators[transIndex].getMetadata().getId();
        context.segmentAggregator.add(generateMergeTransactionAndUpdateMetadata(transactionId, context));
        ByteArrayOutputStream transactionData = dataBySegment.get(transactionId);
        parentData.write(transactionData.toByteArray());
        transactionData.close();
    }
    // Flush all the Aggregators as long as at least one of them reports being able to flush and that it did flush something.
    flushAllSegments(context);
    // Now check to see that only those Transactions that were sealed were merged.
    for (SegmentAggregator transactionAggregator : context.transactionAggregators) {
        SegmentMetadata transactionMetadata = transactionAggregator.getMetadata();
        boolean expectedMerged = sealedTransactionIds.contains(transactionMetadata.getId());
        if (expectedMerged) {
            Assert.assertTrue("Transaction to be merged was not marked as deleted in metadata.", transactionMetadata.isDeleted());
            Assert.assertFalse("Transaction to be merged still exists in storage.", context.storage.exists(transactionMetadata.getName(), TIMEOUT).join());
        } else {
            Assert.assertFalse("Transaction not to be merged was marked as deleted in metadata.", transactionMetadata.isDeleted());
            boolean exists = context.storage.exists(transactionMetadata.getName(), TIMEOUT).join();
            if (exists) {
                // We're not expecting this to exist, but if it does, do check it.
                SegmentProperties sp = context.storage.getStreamSegmentInfo(transactionMetadata.getName(), TIMEOUT).join();
                Assert.assertFalse("Transaction not to be merged is sealed in storage.", sp.isSealed());
            }
        }
    }
    // Then seal the rest of the Transactions and re-run the flush on the parent a few times.
    for (SegmentAggregator a : context.transactionAggregators) {
        long transactionId = a.getMetadata().getId();
        if (!sealedTransactionIds.contains(transactionId)) {
            // This Transaction was not sealed (and merged) previously. Do it now.
            a.add(generateSealAndUpdateMetadata(transactionId, context));
            sealedTransactionIds.add(transactionId);
        }
    }
    // Flush all the Aggregators as long as at least one of them reports being able to flush and that it did flush something.
    flushAllSegments(context);
    // Verify that all Transactions are now fully merged.
    for (SegmentAggregator transactionAggregator : context.transactionAggregators) {
        SegmentMetadata transactionMetadata = transactionAggregator.getMetadata();
        Assert.assertTrue("Merged Transaction was not marked as deleted in metadata.", transactionMetadata.isDeleted());
        Assert.assertFalse("Merged Transaction still exists in storage.", context.storage.exists(transactionMetadata.getName(), TIMEOUT).join());
    }
    // Verify that, in the end, the contents of the parent Segment are as expected.
    verifySegmentData(parentData.toByteArray(), context);
    // Verify calls to completeMerge.
    val expectedMergeOpSources = Arrays.stream(context.transactionAggregators).map(a -> a.getMetadata().getId()).collect(Collectors.toSet());
    Assert.assertEquals("Unexpected number of calls to completeMerge.", expectedMergeOpSources.size(), actualMergeOpAck.size());
    val actualMergeOpSources = actualMergeOpAck.stream().map(Map.Entry::getValue).collect(Collectors.toSet());
    AssertExtensions.assertContainsSameElements("Unexpected sources for invocation to completeMerge.", expectedMergeOpSources, actualMergeOpSources);
    for (Map.Entry<Long, Long> e : actualMergeOpAck) {
        Assert.assertEquals("Unexpected target for invocation to completeMerge.", context.segmentAggregator.getMetadata().getId(), (long) e.getKey());
    }
}
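The completeMerge verification at the end of this test follows a general pattern: record every (target, source) acknowledgment through a callback, then check the set of sources and the uniqueness of the target. Below is a minimal stand-alone sketch of that pattern; the names are hypothetical, not Pravega's API.

import java.util.AbstractMap;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiConsumer;

// Minimal sketch of the merge-acknowledgment accounting used in the test above.
class MergeAckRecorder {

    private final List<Map.Entry<Long, Long>> acks = new ArrayList<>();

    // The callback the data source would invoke once per completed merge.
    BiConsumer<Long, Long> callback() {
        return (target, source) -> this.acks.add(new AbstractMap.SimpleImmutableEntry<>(target, source));
    }

    // Every expected source must appear exactly once, and every target must be the parent.
    boolean verify(long parentId, Set<Long> expectedSources) {
        Set<Long> actualSources = new HashSet<>();
        for (Map.Entry<Long, Long> e : this.acks) {
            if (e.getKey() != parentId) {
                return false;
            }
            actualSources.add(e.getValue());
        }
        return this.acks.size() == expectedSources.size() && actualSources.equals(expectedSources);
    }
}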

Example 40 with SegmentMetadata

Use of io.pravega.segmentstore.server.SegmentMetadata in project pravega.

Class ContainerKeyIndexTests, method testCriticalSegmentThrottling.

/**
 * Tests that system-critical Segments get the right amount of credit.
 */
@Test
public void testCriticalSegmentThrottling() {
    @Cleanup val context = new TestContext();
    @Cleanup ContainerKeyIndex.SegmentTracker segmentTracker = context.index.new SegmentTracker();
    DirectSegmentAccess mockSegment = Mockito.mock(DirectSegmentAccess.class);
    SegmentMetadata mockSegmentMetadata = Mockito.mock(SegmentMetadata.class);
    // System critical segment.
    SegmentType segmentType = SegmentType.builder().critical().system().build();
    Mockito.when(mockSegmentMetadata.getType()).thenReturn(segmentType);
    Mockito.when(mockSegment.getInfo()).thenReturn(mockSegmentMetadata);
    Mockito.when(mockSegment.getSegmentId()).thenReturn(1L);
    // Update size is 1 byte smaller than the limit, so it should not block.
    int updateSize = TableExtensionConfig.SYSTEM_CRITICAL_MAX_UNINDEXED_LENGTH.getDefaultValue() - 1;
    segmentTracker.throttleIfNeeded(mockSegment, () -> CompletableFuture.completedFuture(null), updateSize).join();
    Assert.assertEquals(segmentTracker.getUnindexedSizeBytes(1L), TableExtensionConfig.SYSTEM_CRITICAL_MAX_UNINDEXED_LENGTH.getDefaultValue() - 1);
    // Now, we do another update and check that the Segment has no credit.
    AssertExtensions.assertThrows(TimeoutException.class, () -> segmentTracker.throttleIfNeeded(mockSegment, () -> CompletableFuture.completedFuture(null), updateSize).get(10, TimeUnit.MILLISECONDS));
}
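The credit mechanism under test can be pictured as a per-segment budget of unindexed bytes. The sketch below is a hypothetical illustration of that idea, not ContainerKeyIndex.SegmentTracker's actual implementation.

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical per-segment byte budget; illustration only.
class CreditThrottleSketch {

    private final long maxUnindexedBytes;
    private final Map<Long, Long> unindexedBytes = new ConcurrentHashMap<>();

    CreditThrottleSketch(long maxUnindexedBytes) {
        this.maxUnindexedBytes = maxUnindexedBytes;
    }

    // Completes immediately while the segment still has credit; otherwise returns a
    // future that would only complete once the indexer catches up (never, in this sketch).
    CompletableFuture<Void> throttleIfNeeded(long segmentId, int updateSize) {
        long pending = this.unindexedBytes.merge(segmentId, (long) updateSize, Long::sum);
        return pending <= this.maxUnindexedBytes
                ? CompletableFuture.completedFuture(null)
                : new CompletableFuture<>(); // out of credit: the caller blocks.
    }
}

Calling throttleIfNeeded twice with an updateSize one byte under the limit reproduces the test's behavior: the first call completes immediately, while the second stays pending and times out.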
