Example 1 with StorageMetadataCheckpointOperation

Use of io.pravega.segmentstore.server.logs.operations.StorageMetadataCheckpointOperation in project pravega by pravega.

From class WriterStateTests, method testLastRead.

/**
 * Tests {@link WriterState#getLastRead()} and {@link WriterState#setLastRead}.
 */
@Test
public void testLastRead() {
    val s = new WriterState();
    Assert.assertNull(s.getLastRead());
    val q = new LinkedList<Operation>();
    q.add(new MetadataCheckpointOperation());
    q.add(new StorageMetadataCheckpointOperation());
    s.setLastRead(q);
    Assert.assertSame(q, s.getLastRead());
    q.removeFirst();
    Assert.assertEquals(1, s.getLastRead().size());
    q.removeFirst();
    Assert.assertNull(s.getLastRead());
}
Also used: lombok.val, io.pravega.segmentstore.server.logs.operations.StorageMetadataCheckpointOperation, io.pravega.segmentstore.server.logs.operations.MetadataCheckpointOperation, java.util.LinkedList, org.junit.Test
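
The contract exercised by this test can be summarized in a small, self-contained sketch (hypothetical code, not part of Pravega): the setter keeps a live reference to the caller's queue, and the getter reports null once that shared queue has been drained.

import java.util.Queue;

// Hypothetical holder with the same last-read semantics the test above verifies.
class LastReadHolder<T> {
    private Queue<T> lastRead;

    void setLastRead(Queue<T> items) {
        // Keep a live reference; the caller may continue to drain this queue.
        this.lastRead = items;
    }

    Queue<T> getLastRead() {
        // Once the shared queue is empty, report null rather than an empty collection.
        return (this.lastRead == null || this.lastRead.isEmpty()) ? null : this.lastRead;
    }
}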

Example 2 with StorageMetadataCheckpointOperation

Use of io.pravega.segmentstore.server.logs.operations.StorageMetadataCheckpointOperation in project pravega by pravega.

From class DurableLogTests, method testTruncateWithoutRecovery.

// endregion
// region Truncation
/**
 * Tests the truncate() method without doing any recovery.
 */
@Test
public void testTruncateWithoutRecovery() {
    int streamSegmentCount = 50;
    int appendsPerStreamSegment = 20;
    // Setup a DurableLog and start it.
    AtomicReference<TestDurableDataLog> dataLog = new AtomicReference<>();
    AtomicReference<Boolean> truncationOccurred = new AtomicReference<>();
    @Cleanup TestDurableDataLogFactory dataLogFactory = new TestDurableDataLogFactory(new InMemoryDurableDataLogFactory(MAX_DATA_LOG_APPEND_SIZE, executorService()), dataLog::set);
    @Cleanup Storage storage = InMemoryStorageFactory.newStorage(executorService());
    storage.initialize(1);
    UpdateableContainerMetadata metadata = new MetadataBuilder(CONTAINER_ID).build();
    @Cleanup CacheStorage cacheStorage = new DirectMemoryCache(Integer.MAX_VALUE);
    @Cleanup CacheManager cacheManager = new CacheManager(CachePolicy.INFINITE, cacheStorage, executorService());
    @Cleanup ReadIndex readIndex = new ContainerReadIndex(DEFAULT_READ_INDEX_CONFIG, metadata, storage, cacheManager, executorService());
    // First DurableLog. We use this for generating data.
    try (DurableLog durableLog = new DurableLog(ContainerSetup.defaultDurableLogConfig(), metadata, dataLogFactory, readIndex, executorService())) {
        durableLog.startAsync().awaitRunning();
        // Hook up a listener to figure out when truncation actually happens.
        dataLog.get().setTruncateCallback(seqNo -> truncationOccurred.set(true));
        // Generate some test data (we need to do this after we started the DurableLog because in the process of
        // recovery, it wipes away all existing metadata).
        Set<Long> streamSegmentIds = createStreamSegmentsWithOperations(streamSegmentCount, durableLog);
        List<Operation> queuedOperations = generateOperations(streamSegmentIds, new HashMap<>(), appendsPerStreamSegment, METADATA_CHECKPOINT_EVERY, false, false);
        // Process all operations.
        OperationWithCompletion.allOf(processOperations(queuedOperations, durableLog)).join();
        // Add a MetadataCheckpointOperation at the end, after everything else has processed. This ensures that it
        // sits in a DataFrame by itself and enables us to truncate everything at the end.
        processOperation(new MetadataCheckpointOperation(), durableLog).completion.join();
        awaitLastOperationAdded(durableLog, metadata);
        // Get a list of all the operations, before truncation.
        List<Operation> originalOperations = readUpToSequenceNumber(durableLog, metadata.getOperationSequenceNumber());
        boolean fullTruncationPossible = false;
        long currentTruncatedSeqNo = originalOperations.get(0).getSequenceNumber();
        // At the end, verify all operations and all entries in the DataLog were truncated.
        for (int i = 0; i < originalOperations.size(); i++) {
            Operation currentOperation = originalOperations.get(i);
            truncationOccurred.set(false);
            if (currentOperation instanceof MetadataCheckpointOperation) {
                // Perform the truncation.
                durableLog.truncate(currentOperation.getSequenceNumber(), TIMEOUT).join();
                awaitLastOperationAdded(durableLog, metadata);
                if (currentOperation.getSequenceNumber() != currentTruncatedSeqNo) {
                    // If the operation we're about to truncate to is actually the first in the log, then we should
                    // not be expecting any truncation.
                    Assert.assertTrue("No truncation occurred even though a valid Truncation Point was passed: " + currentOperation.getSequenceNumber(), truncationOccurred.get());
                    // Now verify that we get a StorageMetadataCheckpointOperation queued.
                    AssertExtensions.assertGreaterThan("Expected an operation to be queued as part of truncation.", 0, durableLog.getInMemoryOperationLog().size());
                    val readAfterTruncate = durableLog.read(1, TIMEOUT).join();
                    Assert.assertTrue("Expected a StorageMetadataCheckpointOperation to be queued as part of truncation.", readAfterTruncate.poll() instanceof StorageMetadataCheckpointOperation);
                }
                if (i == originalOperations.size() - 1) {
                    // Sometimes the Truncation Point is on the same DataFrame as other data, and it's the last DataFrame;
                    // In that case, it cannot be truncated, since truncating the frame would mean losing the Checkpoint as well.
                    fullTruncationPossible = durableLog.getInMemoryOperationLog().size() == 0;
                }
            } else {
                // Verify we are not allowed to truncate on non-valid Truncation Points.
                AssertExtensions.assertSuppliedFutureThrows("DurableLog allowed truncation on a non-MetadataCheckpointOperation.", () -> durableLog.truncate(currentOperation.getSequenceNumber(), TIMEOUT), ex -> ex instanceof IllegalArgumentException);
                Assert.assertFalse("Not expecting a truncation to have occurred.", truncationOccurred.get());
            }
        }
        // Verify that we can still queue operations to the DurableLog and they can be read.
        // In this case we'll just queue some StreamSegmentMapOperations.
        StreamSegmentMapOperation newOp = new StreamSegmentMapOperation(StreamSegmentInformation.builder().name("foo").build());
        if (!fullTruncationPossible) {
            // We were not able to do a full truncation before. Do one now, since we are guaranteed to have a new DataFrame available.
            MetadataCheckpointOperation lastCheckpoint = new MetadataCheckpointOperation();
            durableLog.add(lastCheckpoint, OperationPriority.Normal, TIMEOUT).join();
            awaitLastOperationAdded(durableLog, metadata);
            durableLog.truncate(lastCheckpoint.getSequenceNumber(), TIMEOUT).join();
        }
        durableLog.add(newOp, OperationPriority.Normal, TIMEOUT).join();
        awaitLastOperationAdded(durableLog, metadata);
        // Full Checkpoint + Storage Checkpoint (auto-added) + new op
        final int expectedOperationCount = 3;
        List<Operation> newOperations = readUpToSequenceNumber(durableLog, metadata.getOperationSequenceNumber());
        Assert.assertEquals("Unexpected number of operations added after full truncation.", expectedOperationCount, newOperations.size());
        Assert.assertTrue("Expecting the first operation after full truncation to be a MetadataCheckpointOperation.", newOperations.get(0) instanceof MetadataCheckpointOperation);
        Assert.assertTrue("Expecting a StorageMetadataCheckpointOperation to be auto-added after full truncation.", newOperations.get(1) instanceof StorageMetadataCheckpointOperation);
        Assert.assertEquals("Unexpected Operation encountered after full truncation.", newOp, newOperations.get(2));
        // Stop the processor.
        durableLog.stopAsync().awaitTerminated();
    }
}
Also used: io.pravega.segmentstore.storage.cache.DirectMemoryCache, io.pravega.segmentstore.server.TestDurableDataLog, io.pravega.segmentstore.server.logs.operations.StorageMetadataCheckpointOperation, io.pravega.segmentstore.server.UpdateableContainerMetadata, io.pravega.segmentstore.server.logs.operations.MergeSegmentOperation, io.pravega.segmentstore.server.logs.operations.Operation, io.pravega.segmentstore.server.logs.operations.StreamSegmentMapOperation, io.pravega.segmentstore.server.logs.operations.MetadataCheckpointOperation, io.pravega.segmentstore.server.logs.operations.CachedStreamSegmentAppendOperation, io.pravega.segmentstore.server.logs.operations.StorageOperation, io.pravega.segmentstore.server.logs.operations.StreamSegmentAppendOperation, io.pravega.segmentstore.server.logs.operations.DeleteSegmentOperation, io.pravega.segmentstore.server.logs.operations.StreamSegmentSealOperation, lombok.Cleanup, io.pravega.segmentstore.server.CacheManager, io.pravega.segmentstore.storage.cache.CacheStorage, lombok.val, io.pravega.segmentstore.server.MetadataBuilder, io.pravega.segmentstore.server.reading.ContainerReadIndex, io.pravega.segmentstore.server.ReadIndex, java.util.concurrent.atomic.AtomicReference, io.pravega.segmentstore.storage.mocks.InMemoryDurableDataLogFactory, io.pravega.segmentstore.storage.Storage, io.pravega.segmentstore.server.TestDurableDataLogFactory, org.junit.Test
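
The rule the loop above exercises can be reduced to a minimal sketch (hypothetical code, assuming only the behavior asserted in the test): truncation is accepted only at the sequence number of a MetadataCheckpointOperation, and any other sequence number is rejected with an IllegalArgumentException.

import java.util.NavigableSet;
import java.util.TreeSet;

// Hypothetical model of "valid Truncation Points": only checkpointed sequence numbers
// may be used as truncation targets.
class TruncationPoints {
    private final NavigableSet<Long> checkpointSeqNos = new TreeSet<>();

    void recordCheckpoint(long seqNo) {
        this.checkpointSeqNos.add(seqNo);
    }

    void truncate(long upToSeqNo) {
        if (!this.checkpointSeqNos.contains(upToSeqNo)) {
            // Mirrors the IllegalArgumentException asserted in the test above.
            throw new IllegalArgumentException("Not a valid Truncation Point: " + upToSeqNo);
        }
        // Everything up to and including the truncation point can now be dropped; in the real
        // DurableLog this also removes whole DataFrames from the underlying DurableDataLog.
        this.checkpointSeqNos.headSet(upToSeqNo, true).clear();
    }
}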

Example 3 with StorageMetadataCheckpointOperation

Use of io.pravega.segmentstore.server.logs.operations.StorageMetadataCheckpointOperation in project pravega by pravega.

From class ContainerMetadataUpdateTransaction, method preProcessOperation.

/**
 * Pre-processes the given Operation. See OperationMetadataUpdater.preProcessOperation for more details on behavior.
 *
 * @param operation The operation to pre-process.
 * @throws ContainerException     If the given operation was rejected given the current state of the container metadata.
 * @throws StreamSegmentException If the given operation was incompatible with the current state of the Segment.
 *                                For example: StreamSegmentNotExistsException, StreamSegmentSealedException or
 *                                StreamSegmentMergedException.
 */
void preProcessOperation(Operation operation) throws ContainerException, StreamSegmentException {
    checkNotSealed();
    if (operation instanceof SegmentOperation) {
        val segmentMetadata = getSegmentUpdateTransaction(((SegmentOperation) operation).getStreamSegmentId());
        if (segmentMetadata.isDeleted()) {
            throw new StreamSegmentNotExistsException(segmentMetadata.getName());
        }
        if (operation instanceof StreamSegmentAppendOperation) {
            segmentMetadata.preProcessOperation((StreamSegmentAppendOperation) operation);
        } else if (operation instanceof StreamSegmentSealOperation) {
            segmentMetadata.preProcessOperation((StreamSegmentSealOperation) operation);
        } else if (operation instanceof MergeSegmentOperation) {
            MergeSegmentOperation mbe = (MergeSegmentOperation) operation;
            SegmentMetadataUpdateTransaction sourceMetadata = getSegmentUpdateTransaction(mbe.getSourceSegmentId());
            sourceMetadata.preProcessAsSourceSegment(mbe);
            segmentMetadata.preProcessAsTargetSegment(mbe, sourceMetadata);
        } else if (operation instanceof UpdateAttributesOperation) {
            segmentMetadata.preProcessOperation((UpdateAttributesOperation) operation);
        } else if (operation instanceof StreamSegmentTruncateOperation) {
            segmentMetadata.preProcessOperation((StreamSegmentTruncateOperation) operation);
        } else if (operation instanceof DeleteSegmentOperation) {
            segmentMetadata.preProcessOperation((DeleteSegmentOperation) operation);
        }
    }
    if (operation instanceof MetadataCheckpointOperation) {
        // MetadataCheckpointOperations do not require preProcess and accept; they can be handled in a single stage.
        processMetadataOperation((MetadataCheckpointOperation) operation);
    } else if (operation instanceof StorageMetadataCheckpointOperation) {
        // StorageMetadataCheckpointOperations do not require preProcess and accept; they can be handled in a single stage.
        processMetadataOperation((StorageMetadataCheckpointOperation) operation);
    } else if (operation instanceof StreamSegmentMapOperation) {
        preProcessMetadataOperation((StreamSegmentMapOperation) operation);
    }
}
Also used: lombok.val, io.pravega.segmentstore.server.logs.operations.StreamSegmentSealOperation, io.pravega.segmentstore.server.logs.operations.StorageMetadataCheckpointOperation, io.pravega.segmentstore.server.logs.operations.MetadataCheckpointOperation, io.pravega.segmentstore.server.logs.operations.UpdateAttributesOperation, io.pravega.segmentstore.server.logs.operations.MergeSegmentOperation, io.pravega.segmentstore.server.SegmentOperation, io.pravega.segmentstore.server.logs.operations.DeleteSegmentOperation, io.pravega.segmentstore.server.logs.operations.StreamSegmentMapOperation, io.pravega.segmentstore.server.logs.operations.StreamSegmentAppendOperation, io.pravega.segmentstore.contracts.StreamSegmentNotExistsException, io.pravega.segmentstore.server.logs.operations.StreamSegmentTruncateOperation
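
The branching above follows a two-stage pattern for segment-scoped operations (validate here, apply later in an accept phase) and a single-stage pattern for checkpoint operations. A minimal sketch of that shape, using hypothetical types rather than Pravega classes:

// Hypothetical operation hierarchy, used only to illustrate the dispatch shape above.
interface Op {}
class SegmentOp implements Op {}
class CheckpointOp implements Op {}

class PreProcessor {
    void preProcess(Op operation) {
        if (operation instanceof SegmentOp) {
            // Stage 1 of 2: validate against the current metadata; the change is applied later,
            // in a separate accept step, once the operation is durably persisted.
            validate((SegmentOp) operation);
        } else if (operation instanceof CheckpointOp) {
            // Single stage: checkpoints are processed completely here, with no accept step.
            process((CheckpointOp) operation);
        }
    }

    private void validate(SegmentOp op) { /* e.g. reject if the segment is deleted or sealed */ }
    private void process(CheckpointOp op) { /* serialize or apply the checkpoint contents */ }
}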

Example 4 with StorageMetadataCheckpointOperation

Use of io.pravega.segmentstore.server.logs.operations.StorageMetadataCheckpointOperation in project pravega by pravega.

From class DurableDataLogRepairCommand, method createUserDefinedOperation.

/**
 * Guides the user to generate a new {@link Operation} that will eventually modify the Original Log.
 *
 * @return New {@link Operation} to be added in the Original Log.
 */
@VisibleForTesting
Operation createUserDefinedOperation() {
    Operation result;
    final String operations = "[DeleteSegmentOperation|MergeSegmentOperation|MetadataCheckpointOperation|" + "StorageMetadataCheckpointOperation|StreamSegmentAppendOperation|StreamSegmentMapOperation|" + "StreamSegmentSealOperation|StreamSegmentTruncateOperation|UpdateAttributesOperation]";
    switch(getStringUserInput("Type one of the following Operations to instantiate: " + operations)) {
        case "DeleteSegmentOperation":
            long segmentId = getLongUserInput("Input Segment Id for DeleteSegmentOperation:");
            result = new DeleteSegmentOperation(segmentId);
            long offset = getLongUserInput("Input Segment Offset for DeleteSegmentOperation:");
            ((DeleteSegmentOperation) result).setStreamSegmentOffset(offset);
            break;
        case "MergeSegmentOperation":
            long targetSegmentId = getLongUserInput("Input Target Segment Id for MergeSegmentOperation:");
            long sourceSegmentId = getLongUserInput("Input Source Segment Id for MergeSegmentOperation:");
            result = new MergeSegmentOperation(targetSegmentId, sourceSegmentId, createAttributeUpdateCollection());
            offset = getLongUserInput("Input Segment Offset for MergeSegmentOperation:");
            ((MergeSegmentOperation) result).setStreamSegmentOffset(offset);
            break;
        case "MetadataCheckpointOperation":
            result = new MetadataCheckpointOperation();
            ((MetadataCheckpointOperation) result).setContents(createOperationContents());
            break;
        case "StorageMetadataCheckpointOperation":
            result = new StorageMetadataCheckpointOperation();
            ((StorageMetadataCheckpointOperation) result).setContents(createOperationContents());
            break;
        case "StreamSegmentAppendOperation":
            segmentId = getLongUserInput("Input Segment Id for StreamSegmentAppendOperation:");
            offset = getLongUserInput("Input Segment Offset for StreamSegmentAppendOperation:");
            result = new StreamSegmentAppendOperation(segmentId, offset, createOperationContents(), createAttributeUpdateCollection());
            break;
        case "StreamSegmentMapOperation":
            result = new StreamSegmentMapOperation(createSegmentProperties());
            break;
        case "StreamSegmentSealOperation":
            segmentId = getLongUserInput("Input Segment Id for StreamSegmentSealOperation:");
            result = new StreamSegmentSealOperation(segmentId);
            offset = getLongUserInput("Input Segment Offset for StreamSegmentSealOperation:");
            ((StreamSegmentSealOperation) result).setStreamSegmentOffset(offset);
            break;
        case "StreamSegmentTruncateOperation":
            segmentId = getLongUserInput("Input Segment Id for StreamSegmentTruncateOperation:");
            offset = getLongUserInput("Input Offset for StreamSegmentTruncateOperation:");
            result = new StreamSegmentTruncateOperation(segmentId, offset);
            break;
        case "UpdateAttributesOperation":
            segmentId = getLongUserInput("Input Segment Id for UpdateAttributesOperation:");
            result = new UpdateAttributesOperation(segmentId, createAttributeUpdateCollection());
            break;
        default:
            output("Invalid operation, please select one of " + operations);
            throw new UnsupportedOperationException();
    }
    return result;
}
Also used: io.pravega.segmentstore.server.logs.operations.StorageMetadataCheckpointOperation, io.pravega.segmentstore.server.logs.operations.MetadataCheckpointOperation, io.pravega.segmentstore.server.logs.operations.StreamSegmentSealOperation, io.pravega.segmentstore.server.logs.operations.UpdateAttributesOperation, io.pravega.segmentstore.server.logs.operations.StreamSegmentMapOperation, io.pravega.segmentstore.server.logs.operations.MergeSegmentOperation, io.pravega.segmentstore.server.logs.operations.Operation, io.pravega.segmentstore.server.logs.operations.StreamSegmentTruncateOperation, io.pravega.segmentstore.server.logs.operations.StreamSegmentAppendOperation, io.pravega.segmentstore.server.logs.operations.DeleteSegmentOperation, com.google.common.annotations.VisibleForTesting

Example 5 with StorageMetadataCheckpointOperation

Use of io.pravega.segmentstore.server.logs.operations.StorageMetadataCheckpointOperation in project pravega by pravega.

From class ContainerMetadataUpdateTransactionTests, method testOperationConcurrentSerializationDeletion.

@Test
public void testOperationConcurrentSerializationDeletion() throws ContainerException, StreamSegmentException {
    CompletableFuture<Void> deletion = new CompletableFuture<>();
    InstrumentedContainerMetadata metadata = new InstrumentedContainerMetadata(CONTAINER_ID, 1000, deletion);
    // Add some segments to the metadata.
    populateMetadata(metadata);
    metadata.getGetAllStreamSegmentIdsFuture().thenRun(() -> {
        // Cannot call getStreamSegmentMetadata because it is instrumented and will block, so we must manually construct it.
        UpdateableSegmentMetadata segment = new StreamSegmentMetadata(SEGMENT_NAME, SEGMENT_ID, 0);
        // Ensures it is eligible for eviction.
        segment.markDeleted();
        metadata.cleanup(Set.of(segment), SEGMENT_LENGTH);
    }).thenRun(() -> {
        deletion.complete(null);
    });
    ContainerMetadataUpdateTransaction transaction = createUpdateTransaction(metadata);
    StorageMetadataCheckpointOperation checkpoint = createStorageMetadataCheckpoint();
    // If successful, this operation should not throw a NullPointerException.
    transaction.preProcessOperation(checkpoint);
}
Also used: java.util.concurrent.CompletableFuture, io.pravega.segmentstore.server.UpdateableSegmentMetadata, io.pravega.segmentstore.server.containers.StreamSegmentMetadata, io.pravega.segmentstore.server.logs.operations.StorageMetadataCheckpointOperation, org.junit.Test
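
The test relies on a CompletableFuture used as a latch so that the segment deletion is forced to happen at a precise point during checkpoint serialization. A stripped-down sketch of that sequencing trick (plain JDK, hypothetical names, no Pravega types):

import java.util.concurrent.CompletableFuture;

class LatchedRaceSketch {
    public static void main(String[] args) {
        CompletableFuture<Void> deletion = new CompletableFuture<>();

        // Stand-in for the instrumented metadata: when the observed step runs, perform the
        // concurrent mutation and then release the latch.
        CompletableFuture<Void> instrumentedStep = CompletableFuture
                .runAsync(() -> System.out.println("segment evicted while checkpoint serializes"))
                .thenRun(() -> deletion.complete(null));

        // The code under test waits on the latch, guaranteeing the mutation has happened
        // before serialization continues; the assertion is simply that no NPE follows.
        deletion.join();
        instrumentedStep.join();
        System.out.println("checkpoint serialization finished cleanly");
    }
}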

Aggregations

StorageMetadataCheckpointOperation (io.pravega.segmentstore.server.logs.operations.StorageMetadataCheckpointOperation): 7 usages
MetadataCheckpointOperation (io.pravega.segmentstore.server.logs.operations.MetadataCheckpointOperation): 6 usages
StreamSegmentAppendOperation (io.pravega.segmentstore.server.logs.operations.StreamSegmentAppendOperation): 4 usages
Test (org.junit.Test): 4 usages
DeleteSegmentOperation (io.pravega.segmentstore.server.logs.operations.DeleteSegmentOperation): 3 usages
MergeSegmentOperation (io.pravega.segmentstore.server.logs.operations.MergeSegmentOperation): 3 usages
Operation (io.pravega.segmentstore.server.logs.operations.Operation): 3 usages
StreamSegmentMapOperation (io.pravega.segmentstore.server.logs.operations.StreamSegmentMapOperation): 3 usages
StreamSegmentSealOperation (io.pravega.segmentstore.server.logs.operations.StreamSegmentSealOperation): 3 usages
lombok.val (lombok.val): 3 usages
VisibleForTesting (com.google.common.annotations.VisibleForTesting): 2 usages
ReadIndex (io.pravega.segmentstore.server.ReadIndex): 2 usages
UpdateableContainerMetadata (io.pravega.segmentstore.server.UpdateableContainerMetadata): 2 usages
StreamSegmentTruncateOperation (io.pravega.segmentstore.server.logs.operations.StreamSegmentTruncateOperation): 2 usages
UpdateAttributesOperation (io.pravega.segmentstore.server.logs.operations.UpdateAttributesOperation): 2 usages
CompletableFuture (java.util.concurrent.CompletableFuture): 2 usages
Preconditions (com.google.common.base.Preconditions): 1 usage
AbstractService (com.google.common.util.concurrent.AbstractService): 1 usage
AdminCommandState (io.pravega.cli.admin.AdminCommandState): 1 usage
CommandArgs (io.pravega.cli.admin.CommandArgs): 1 usage