
Example 46 with Supplier

Use of java.util.function.Supplier in project pravega by pravega.

From the class OrderedItemProcessorTests, method testCapacityExceeded:

/**
 * Tests a scenario where we add more items than the capacity allows. We want to verify that the items are queued up
 * and, when their turn comes, processed in order.
 */
@Test
public void testCapacityExceeded() {
    final int maxDelayMillis = 20;
    final int itemCount = 20 * CAPACITY;
    val processedItems = Collections.synchronizedCollection(new HashSet<Integer>());
    val processFuture = new CompletableFuture<Void>();
    // Each item waits for a signal to complete. When the signal arrives, each takes a random amount of time to complete.
    val rnd = new Random(0);
    Supplier<Duration> delaySupplier = () -> Duration.ofMillis(rnd.nextInt(maxDelayMillis));
    Function<Integer, CompletableFuture<Integer>> itemProcessor = i -> {
        if (!processedItems.add(i)) {
            Assert.fail("Duplicate item detected: " + i);
        }
        CompletableFuture<Integer> result = new CompletableFuture<>();
        processFuture.thenComposeAsync(v -> Futures.delayedFuture(delaySupplier.get(), executorService()), executorService()).whenCompleteAsync((r, ex) -> result.complete(TRANSFORMER.apply(i)));
        return result;
    };
    val resultFutures = new ArrayList<CompletableFuture<Integer>>();
    @Cleanup val p = new TestProcessor(CAPACITY, itemProcessor, executorService());
    // Fill up to capacity, and beyond.
    for (int i = 0; i < itemCount; i++) {
        resultFutures.add(p.process(i));
        if (i >= CAPACITY) {
            Assert.assertFalse("Item has been immediately processed when over capacity: " + i, processedItems.contains(i));
        }
    }
    // Release the completion signal so the queued items can finish processing.
    processFuture.complete(null);
    // Verify they have been executed in order.
    val results = Futures.allOfWithResults(resultFutures).join();
    for (int i = 0; i < results.size(); i++) {
        Assert.assertEquals("Unexpected result at index " + i, TRANSFORMER.apply(i), results.get(i));
    }
}
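
The test above uses the Supplier<Duration> only as a lazy source of randomized delays that is re-evaluated per item. Below is a minimal, self-contained sketch of the same idea using only JDK types (CompletableFuture.delayedExecutor requires Java 9+; the class and variable names are illustrative, not Pravega APIs):

import java.time.Duration;
import java.util.Random;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class DelaySupplierSketch {
    public static void main(String[] args) {
        ScheduledExecutorService executor = Executors.newScheduledThreadPool(2);
        Random rnd = new Random(0);
        // Nothing is computed until get() is called; each call yields a fresh random delay.
        Supplier<Duration> delaySupplier = () -> Duration.ofMillis(rnd.nextInt(20));

        // Complete a value after the supplied delay, loosely mirroring what Futures.delayedFuture() does in the test.
        Duration delay = delaySupplier.get();
        Executor delayed = CompletableFuture.delayedExecutor(delay.toMillis(), TimeUnit.MILLISECONDS, executor);
        CompletableFuture<Integer> result = CompletableFuture.supplyAsync(() -> 42, delayed);

        System.out.println(result.join());
        executor.shutdown();
    }
}

Because the supplier is evaluated separately for every item, each item gets its own delay rather than a single delay computed up front.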

Example 47 with Supplier

Use of java.util.function.Supplier in project pravega by pravega.

From the class ControllerEventProcessors, method handleOrphanedReaders:

private CompletableFuture<Void> handleOrphanedReaders(final EventProcessorGroup<? extends ControllerEvent> group, final Supplier<Set<String>> processes) {
    return withRetriesAsync(() -> CompletableFuture.supplyAsync(() -> {
        try {
            return group.getProcesses();
        } catch (CheckpointStoreException e) {
            if (e.getType().equals(CheckpointStoreException.Type.NoNode)) {
                return Collections.<String>emptySet();
            }
            throw new CompletionException(e);
        }
    }, executor), RETRYABLE_PREDICATE, Integer.MAX_VALUE, executor).thenComposeAsync(groupProcesses -> withRetriesAsync(() -> CompletableFuture.supplyAsync(() -> {
        try {
            return new ImmutablePair<>(processes.get(), groupProcesses);
        } catch (Exception e) {
            log.error(String.format("Error fetching current processes%s", group.toString()), e);
            throw new CompletionException(e);
        }
    }, executor), RETRYABLE_PREDICATE, Integer.MAX_VALUE, executor)).thenComposeAsync(pair -> {
        Set<String> activeProcesses = pair.getLeft();
        Set<String> registeredProcesses = pair.getRight();
        if (registeredProcesses == null || registeredProcesses.isEmpty()) {
            return CompletableFuture.completedFuture(null);
        }
        if (activeProcesses != null) {
            registeredProcesses.removeAll(activeProcesses);
        }
        List<CompletableFuture<Void>> futureList = new ArrayList<>();
        for (String process : registeredProcesses) {
            futureList.add(withRetriesAsync(() -> CompletableFuture.runAsync(() -> {
                try {
                    group.notifyProcessFailure(process);
                } catch (CheckpointStoreException e) {
                    log.error(String.format("Error notifying failure of process=%s in event processor group %s", process, group.toString()), e);
                    throw new CompletionException(e);
                }
            }, executor), RETRYABLE_PREDICATE, Integer.MAX_VALUE, executor));
        }
        return Futures.allOf(futureList);
    });
}
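
Here the Supplier<Set<String>> parameter defers fetching the set of currently active processes until the moment it is needed, so each retry attempt sees fresh data rather than a snapshot captured at call time. Below is a minimal sketch of that retry-with-Supplier pattern using only JDK types (Pravega's RetryHelper.withRetriesAsync has a richer API; this simplified helper is an illustrative assumption):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.function.Supplier;

public final class SupplierRetrySketch {

    // Returns a future that completes with the first successful result, retrying up to 'attempts' times.
    public static <T> CompletableFuture<T> retryAsync(Supplier<T> supplier, int attempts, Executor executor) {
        CompletableFuture<T> result = new CompletableFuture<>();
        attempt(supplier, attempts, executor, result);
        return result;
    }

    // The supplier is re-invoked on every attempt, so retries never reuse stale results.
    private static <T> void attempt(Supplier<T> supplier, int attemptsLeft, Executor executor, CompletableFuture<T> result) {
        CompletableFuture.supplyAsync(supplier, executor).whenComplete((value, error) -> {
            if (error == null) {
                result.complete(value);
            } else if (attemptsLeft <= 1) {
                result.completeExceptionally(error);
            } else {
                attempt(supplier, attemptsLeft - 1, executor, result);
            }
        });
    }
}

For example, retryAsync(processes::get, 3, executor) would re-query the underlying store on each of up to three attempts, where processes stands in for the Supplier argument of the method above.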

Example 48 with Supplier

Use of java.util.function.Supplier in project pravega by pravega.

From the class SegmentAggregatorTests, method testMergeWithStorageErrors:

/**
 * Tests the flush() method with Append and MergeTransactionOperations.
 */
@Test
public void testMergeWithStorageErrors() throws Exception {
    // Storage Errors
    // This is the number of appends per Segment/Transaction - there will be a lot of appends here.
    final int appendCount = 100;
    final int failSyncEvery = 2;
    final int failAsyncEvery = 3;
    final WriterConfig config = WriterConfig.builder()
            .with(WriterConfig.FLUSH_THRESHOLD_BYTES, appendCount * 50) // Extra high length threshold.
            .with(WriterConfig.FLUSH_THRESHOLD_MILLIS, 1000L)
            .with(WriterConfig.MAX_FLUSH_SIZE_BYTES, 10000)
            .with(WriterConfig.MIN_READ_TIMEOUT_MILLIS, 10L)
            .build();
    @Cleanup TestContext context = new TestContext(config);
    // Create and initialize all segments.
    context.storage.create(context.segmentAggregator.getMetadata().getName(), TIMEOUT).join();
    context.segmentAggregator.initialize(TIMEOUT).join();
    for (SegmentAggregator a : context.transactionAggregators) {
        context.storage.create(a.getMetadata().getName(), TIMEOUT).join();
        a.initialize(TIMEOUT).join();
    }
    // Store written data by segment - so we can check it later.
    HashMap<Long, ByteArrayOutputStream> dataBySegment = new HashMap<>();
    // Add a few appends to each Transaction aggregator and to the parent aggregator and seal all Transactions.
    for (int i = 0; i < context.transactionAggregators.length; i++) {
        SegmentAggregator transactionAggregator = context.transactionAggregators[i];
        long transactionId = transactionAggregator.getMetadata().getId();
        ByteArrayOutputStream writtenData = new ByteArrayOutputStream();
        dataBySegment.put(transactionId, writtenData);
        for (int appendId = 0; appendId < appendCount; appendId++) {
            StorageOperation appendOp = generateAppendAndUpdateMetadata(appendId, transactionId, context);
            transactionAggregator.add(appendOp);
            getAppendData(appendOp, writtenData, context);
        }
        transactionAggregator.add(generateSealAndUpdateMetadata(transactionId, context));
    }
    // Merge all the Transactions in the parent Segment.
    @Cleanup ByteArrayOutputStream parentData = new ByteArrayOutputStream();
    for (int transIndex = 0; transIndex < context.transactionAggregators.length; transIndex++) {
        // Merge this Transaction into the parent & record its data in the final parent data array.
        long transactionId = context.transactionAggregators[transIndex].getMetadata().getId();
        context.segmentAggregator.add(generateMergeTransactionAndUpdateMetadata(transactionId, context));
        ByteArrayOutputStream transactionData = dataBySegment.get(transactionId);
        parentData.write(transactionData.toByteArray());
        transactionData.close();
    }
    // Have the writes fail every few attempts with a well known exception.
    AtomicReference<IntentionalException> setException = new AtomicReference<>();
    Supplier<Exception> exceptionSupplier = () -> {
        IntentionalException ex = new IntentionalException(Long.toString(context.timer.getElapsedMillis()));
        setException.set(ex);
        return ex;
    };
    context.storage.setConcatSyncErrorInjector(new ErrorInjector<>(count -> count % failSyncEvery == 0, exceptionSupplier));
    context.storage.setConcatAsyncErrorInjector(new ErrorInjector<>(count -> count % failAsyncEvery == 0, exceptionSupplier));
    // Flush all the Aggregators, while checking that the right errors get handled and can be recovered from.
    tryFlushAllSegments(context, () -> setException.set(null), setException::get);
    // Verify that all Transactions are now fully merged.
    for (SegmentAggregator transactionAggregator : context.transactionAggregators) {
        SegmentMetadata transactionMetadata = transactionAggregator.getMetadata();
        Assert.assertTrue("Merged Transaction was not marked as deleted in metadata.", transactionMetadata.isDeleted());
        Assert.assertFalse("Merged Transaction still exists in storage.", context.storage.exists(transactionMetadata.getName(), TIMEOUT).join());
    }
    // Verify that, in the end, the contents of the parent Segment are as expected.
    byte[] expectedData = parentData.toByteArray();
    byte[] actualData = new byte[expectedData.length];
    long storageLength = context.storage.getStreamSegmentInfo(context.segmentAggregator.getMetadata().getName(), TIMEOUT).join().getLength();
    Assert.assertEquals("Unexpected number of bytes flushed/merged to Storage.", expectedData.length, storageLength);
    context.storage.read(readHandle(context.segmentAggregator.getMetadata().getName()), 0, actualData, 0, actualData.length, TIMEOUT).join();
    Assert.assertArrayEquals("Unexpected data written to storage.", expectedData, actualData);
}
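
In this test, a single Supplier<Exception> feeds both the sync and async concat error injectors, so every injected failure is a fresh IntentionalException that is also recorded in setException for later assertions. Below is a minimal sketch of a count-driven injector built around such a supplier (Pravega's ErrorInjector has a different, richer interface; the class below is an illustrative assumption):

import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Predicate;
import java.util.function.Supplier;

public final class SimpleErrorInjector {

    private final Predicate<Integer> shouldFail;          // decides, by invocation count, whether to fail
    private final Supplier<Exception> exceptionSupplier;  // produces a fresh exception per injected failure
    private final AtomicInteger count = new AtomicInteger();

    public SimpleErrorInjector(Predicate<Integer> shouldFail, Supplier<Exception> exceptionSupplier) {
        this.shouldFail = shouldFail;
        this.exceptionSupplier = exceptionSupplier;
    }

    // Call this immediately before the operation under test; it throws if this invocation is selected for failure.
    public void throwIfNecessary() throws Exception {
        if (shouldFail.test(count.incrementAndGet())) {
            throw exceptionSupplier.get();
        }
    }
}

A test would wire it up much like the code above, e.g. new SimpleErrorInjector(c -> c % 2 == 0, exceptionSupplier), so every second call fails with a newly supplied exception.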

Example 49 with Supplier

Use of java.util.function.Supplier in project pravega by pravega.

From the class SegmentAggregatorTests, method testSealWithStorageErrors:

/**
 * Tests the flush() method with Append and StreamSegmentSealOperations when there are Storage errors.
 */
@Test
public void testSealWithStorageErrors() throws Exception {
    // Add some appends and seal, and then flush together. Verify that everything got flushed in one go.
    final int appendCount = 1000;
    final WriterConfig config = WriterConfig.builder()
            .with(WriterConfig.FLUSH_THRESHOLD_BYTES, appendCount * 50) // Extra high length threshold.
            .with(WriterConfig.FLUSH_THRESHOLD_MILLIS, 1000L)
            .with(WriterConfig.MAX_FLUSH_SIZE_BYTES, 10000)
            .with(WriterConfig.MIN_READ_TIMEOUT_MILLIS, 10L)
            .build();
    @Cleanup TestContext context = new TestContext(config);
    context.storage.create(context.segmentAggregator.getMetadata().getName(), TIMEOUT).join();
    context.segmentAggregator.initialize(TIMEOUT).join();
    @Cleanup ByteArrayOutputStream writtenData = new ByteArrayOutputStream();
    // Part 1: flush triggered by accumulated size.
    for (int i = 0; i < appendCount; i++) {
        // Add another operation and record its length (not bothering with flushing here; testFlushSeal() covers that).
        StorageOperation appendOp = generateAppendAndUpdateMetadata(i, SEGMENT_ID, context);
        context.segmentAggregator.add(appendOp);
        getAppendData(appendOp, writtenData, context);
    }
    // Generate and add a Seal Operation.
    StorageOperation sealOp = generateSealAndUpdateMetadata(SEGMENT_ID, context);
    context.segmentAggregator.add(sealOp);
    // Have the writes fail every few attempts with a well known exception.
    AtomicBoolean generateSyncException = new AtomicBoolean(true);
    AtomicBoolean generateAsyncException = new AtomicBoolean(true);
    AtomicReference<IntentionalException> setException = new AtomicReference<>();
    Supplier<Exception> exceptionSupplier = () -> {
        IntentionalException ex = new IntentionalException(Long.toString(context.timer.getElapsedMillis()));
        setException.set(ex);
        return ex;
    };
    context.storage.setSealSyncErrorInjector(new ErrorInjector<>(count -> generateSyncException.getAndSet(false), exceptionSupplier));
    context.storage.setSealAsyncErrorInjector(new ErrorInjector<>(count -> generateAsyncException.getAndSet(false), exceptionSupplier));
    // Call flush and verify that the entire Aggregator got flushed and the Seal got persisted to Storage.
    int attemptCount = 4;
    for (int i = 0; i < attemptCount; i++) {
        // Repeat a number of times; at least one attempt should succeed.
        setException.set(null);
        try {
            FlushResult flushResult = context.segmentAggregator.flush(TIMEOUT).join();
            Assert.assertNull("An exception was expected, but none was thrown.", setException.get());
            Assert.assertNotNull("No FlushResult provided.", flushResult);
        } catch (Exception ex) {
            if (setException.get() != null) {
                Assert.assertEquals("Unexpected exception thrown.", setException.get(), Exceptions.unwrap(ex));
            } else {
                // Not expecting any exception this time.
                throw ex;
            }
        }
        if (!generateAsyncException.get() && !generateSyncException.get() && setException.get() == null) {
            // We are done. We got at least one through.
            break;
        }
    }
    // Verify data.
    byte[] expectedData = writtenData.toByteArray();
    byte[] actualData = new byte[expectedData.length];
    SegmentProperties storageInfo = context.storage.getStreamSegmentInfo(context.segmentAggregator.getMetadata().getName(), TIMEOUT).join();
    Assert.assertEquals("Unexpected number of bytes flushed to Storage.", expectedData.length, storageInfo.getLength());
    Assert.assertTrue("Segment is not sealed in storage post flush.", storageInfo.isSealed());
    Assert.assertTrue("Segment is not marked in metadata as sealed in storage post flush.", context.segmentAggregator.getMetadata().isSealedInStorage());
    context.storage.read(readHandle(context.segmentAggregator.getMetadata().getName()), 0, actualData, 0, actualData.length, TIMEOUT).join();
    Assert.assertArrayEquals("Unexpected data written to storage.", expectedData, actualData);
}
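
Unlike the previous test, the seal error injectors here use AtomicBoolean.getAndSet(false) as the predicate, so the supplied exception is injected exactly once per injector and every later attempt succeeds. A minimal, self-contained sketch of that fail-once-then-succeed pattern (class and message names are illustrative):

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Supplier;

public class OneShotFailureSketch {
    public static void main(String[] args) {
        AtomicBoolean failOnce = new AtomicBoolean(true);
        Supplier<RuntimeException> exceptionSupplier = () -> new RuntimeException("intentional failure");

        for (int attempt = 1; attempt <= 3; attempt++) {
            try {
                // getAndSet(false) returns true only on the first call, so only the first attempt fails.
                if (failOnce.getAndSet(false)) {
                    throw exceptionSupplier.get();
                }
                System.out.println("attempt " + attempt + " succeeded");
            } catch (RuntimeException ex) {
                System.out.println("attempt " + attempt + " failed: " + ex.getMessage());
            }
        }
    }
}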

Example 50 with Supplier

Use of java.util.function.Supplier in project pravega by pravega.

From the class SegmentAggregatorTests, method testFlushAppendWithStorageErrors:

/**
 * Tests the behavior of flush() with appends and storage errors (on the write() method).
 */
@Test
public void testFlushAppendWithStorageErrors() throws Exception {
    final WriterConfig config = DEFAULT_CONFIG;
    final int appendCount = config.getFlushThresholdBytes() * 10;
    final int failSyncEvery = 2;
    final int failAsyncEvery = 3;
    @Cleanup TestContext context = new TestContext(config);
    context.storage.create(context.segmentAggregator.getMetadata().getName(), TIMEOUT).join();
    context.segmentAggregator.initialize(TIMEOUT).join();
    // Have the writes fail every few attempts with a well known exception.
    AtomicReference<IntentionalException> setException = new AtomicReference<>();
    Supplier<Exception> exceptionSupplier = () -> {
        IntentionalException ex = new IntentionalException(Long.toString(context.timer.getElapsedMillis()));
        setException.set(ex);
        return ex;
    };
    context.storage.setWriteSyncErrorInjector(new ErrorInjector<>(count -> count % failSyncEvery == 0, exceptionSupplier));
    context.storage.setWriteAsyncErrorInjector(new ErrorInjector<>(count -> count % failAsyncEvery == 0, exceptionSupplier));
    @Cleanup ByteArrayOutputStream writtenData = new ByteArrayOutputStream();
    // Part 1: flush triggered by accumulated size.
    int exceptionCount = 0;
    for (int i = 0; i < appendCount; i++) {
        // Add another operation and record its length.
        StorageOperation appendOp = generateAppendAndUpdateMetadata(i, SEGMENT_ID, context);
        context.segmentAggregator.add(appendOp);
        getAppendData(appendOp, writtenData, context);
        // Call flush() and inspect the result.
        setException.set(null);
        // Force a flush by incrementing the time by a lot.
        context.increaseTime(config.getFlushThresholdTime().toMillis() + 1);
        FlushResult flushResult = null;
        try {
            flushResult = context.segmentAggregator.flush(TIMEOUT).join();
            Assert.assertNull("An exception was expected, but none was thrown.", setException.get());
            Assert.assertNotNull("No FlushResult provided.", flushResult);
        } catch (Exception ex) {
            if (setException.get() != null) {
                Assert.assertEquals("Unexpected exception thrown.", setException.get(), Exceptions.unwrap(ex));
                exceptionCount++;
            } else {
                // Not expecting any exception this time.
                throw ex;
            }
        }
        // Check flush result.
        if (flushResult != null) {
            AssertExtensions.assertGreaterThan("Not enough bytes were flushed (time threshold).", 0, flushResult.getFlushedBytes());
            Assert.assertEquals("Not expecting any merged bytes in this test.", 0, flushResult.getMergedBytes());
        }
    }
    // Do one last flush at the end to make sure we clear out all the buffers, if there's anything else left.
    // Force a flush by incrementing the time by a lot.
    context.increaseTime(config.getFlushThresholdTime().toMillis() + 1);
    context.storage.setWriteSyncErrorInjector(null);
    context.storage.setWriteAsyncErrorInjector(null);
    context.segmentAggregator.flush(TIMEOUT).join();
    // Verify data.
    byte[] expectedData = writtenData.toByteArray();
    byte[] actualData = new byte[expectedData.length];
    long storageLength = context.storage.getStreamSegmentInfo(context.segmentAggregator.getMetadata().getName(), TIMEOUT).join().getLength();
    Assert.assertEquals("Unexpected number of bytes flushed to Storage.", expectedData.length, storageLength);
    context.storage.read(readHandle(context.segmentAggregator.getMetadata().getName()), 0, actualData, 0, actualData.length, TIMEOUT).join();
    Assert.assertArrayEquals("Unexpected data written to storage.", expectedData, actualData);
    AssertExtensions.assertGreaterThan("Not enough errors injected.", 0, exceptionCount);
}
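
A detail worth noting across the last three tests: the Supplier<Exception> does double duty, constructing the exception and recording it in an AtomicReference so the test can later assert that the exception actually thrown is the one that was injected. A minimal sketch of that record-and-verify pattern (the names below are illustrative, not Pravega APIs):

import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class RecordingSupplierSketch {
    public static void main(String[] args) {
        AtomicReference<Exception> lastInjected = new AtomicReference<>();
        // The supplier both creates a fresh exception and remembers it for later comparison.
        Supplier<Exception> exceptionSupplier = () -> {
            Exception ex = new Exception("injected at " + System.nanoTime());
            lastInjected.set(ex);
            return ex;
        };

        try {
            throw exceptionSupplier.get();
        } catch (Exception caught) {
            // In a test this would be an assertSame; here we simply print the identity check.
            System.out.println("caught the recorded exception: " + (caught == lastInjected.get()));
        }
    }
}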

Aggregations

Supplier (java.util.function.Supplier): 104
Test (org.junit.Test): 43
List (java.util.List): 41
ArrayList (java.util.ArrayList): 28
Map (java.util.Map): 24
Collectors (java.util.stream.Collectors): 23
HashMap (java.util.HashMap): 21
Function (java.util.function.Function): 20
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 19
IOException (java.io.IOException): 17
Arrays (java.util.Arrays): 16
TimeUnit (java.util.concurrent.TimeUnit): 14
Consumer (java.util.function.Consumer): 14
Assert (org.junit.Assert): 14
Collections (java.util.Collections): 13
CompletableFuture (java.util.concurrent.CompletableFuture): 13
Rule (org.junit.Rule): 13
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 12
Cleanup (lombok.Cleanup): 12
Timeout (org.junit.rules.Timeout): 11