Example 1 with StreamCutRecord

Use of io.pravega.controller.store.stream.records.StreamCutRecord in project pravega by pravega.

In the class ControllerMetadataJsonSerializerTest, method testStreamCutRecord.

@Test
public void testStreamCutRecord() {
    StreamCutRecord record = new StreamCutRecord(10L, 10L, ImmutableMap.of(1L, 2L, 3L, 4L));
    testRecordSerialization(record, StreamCutRecord.class);
}
Also used : StreamCutRecord(io.pravega.controller.store.stream.records.StreamCutRecord) Test(org.junit.Test)
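
The round-trip helper testRecordSerialization is not shown on this page. A minimal sketch of such a helper, written with Jackson's ObjectMapper purely for illustration (an assumption; the real test exercises Pravega's controller metadata JSON serializer), could look like this:

// Hypothetical helper, assuming a Jackson-based JSON round trip and that the
// record type is Jackson-serializable; not the actual Pravega code.
private static <T> void testRecordSerialization(T record, Class<T> recordClass) throws Exception {
    com.fasterxml.jackson.databind.ObjectMapper mapper = new com.fasterxml.jackson.databind.ObjectMapper();
    // serialize the record to JSON, then read it back as the same type
    String json = mapper.writeValueAsString(record);
    T roundTripped = mapper.readValue(json, recordClass);
    // the round trip must be lossless for the serializer to be correct
    org.junit.Assert.assertEquals(record, roundTripped);
}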

Example 2 with StreamCutRecord

Use of io.pravega.controller.store.stream.records.StreamCutRecord in project pravega by pravega.

In the class StreamMetadataTasks, method getTruncationStreamCutByTimeLimit.

private CompletableFuture<Map<Long, Long>> getTruncationStreamCutByTimeLimit(String scope, String stream, OperationContext context, RetentionPolicy policy, RetentionSet retentionSet, Map<Long, Long> lowerBound) {
    long currentTime = retentionClock.get().get();
    // we get the streamcuts from the retention set that satisfy the min and max bounds, with min pointing to the most
    // recent streamcut satisfying both bounds while max refers to the oldest such streamcut in the retention set.
    // limits.getKey() will refer to max and limits.getValue() will refer to min.
    Map.Entry<StreamCutReferenceRecord, StreamCutReferenceRecord> limits = getBoundStreamCuts(policy, retentionSet, x -> currentTime - x.getRecordingTime());
    // if the subscriber lowerbound is greater than (ahead of/after) the streamcut corresponding to the max time and is
    // less than (behind/before) the streamcut for the min time from the retention set, then we can safely truncate at
    // the lowerbound. Else we will truncate at the max time bound if it exists.
    // 1. if LB is greater than (ahead of/after) min => truncate at min
    // 2. if LB is less than (behind/before) max => truncate at max
    // 3. if LB is less than (behind/before) min && LB is greater than (ahead of/after) max => truncate at LB
    // 4. if LB is less than (behind/before) min && overlaps max => truncate at max
    // 5. if LB overlaps with both min and max => it has both recent data and older data;
    //    we will truncate at a streamcut less than (behind/before) max in this case.
    CompletableFuture<StreamCutRecord> limitMinFuture = limits.getValue() == null ? CompletableFuture.completedFuture(null) : streamMetadataStore.getStreamCutRecord(scope, stream, limits.getValue(), context, executor);
    // if the lowerbound is empty, simply return min
    if (lowerBound == null || lowerBound.isEmpty()) {
        return limitMinFuture.thenApply(min -> Optional.ofNullable(min).map(StreamCutRecord::getStreamCut).orElse(null));
    }
    Optional<StreamCutReferenceRecord> maxBoundRef = retentionSet.getRetentionRecords().stream().filter(x -> currentTime - x.getRecordingTime() >= policy.getRetentionMax()).max(Comparator.comparingLong(StreamCutReferenceRecord::getRecordingTime));
    CompletableFuture<StreamCutRecord> limitMaxFuture = limits.getKey() == null ? CompletableFuture.completedFuture(null) : streamMetadataStore.getStreamCutRecord(scope, stream, limits.getKey(), context, executor);
    CompletableFuture<StreamCutRecord> maxBoundFuture = maxBoundRef.map(x -> streamMetadataStore.getStreamCutRecord(scope, stream, x, context, executor)).orElse(CompletableFuture.completedFuture(null));
    return CompletableFuture.allOf(limitMaxFuture, limitMinFuture, maxBoundFuture).thenCompose(v -> {
        StreamCutRecord limitMax = limitMaxFuture.join();
        StreamCutRecord limitMin = limitMinFuture.join();
        StreamCutRecord maxBound = maxBoundFuture.join();
        if (limitMin != null) {
            return streamMetadataStore.compareStreamCut(scope, stream, limitMin.getStreamCut(), lowerBound, context, executor).thenCompose(compareWithMin -> {
                switch(compareWithMin) {
                    case EqualOrAfter:
                        // min is equal to or ahead of lb (cases 2-4 above): truncate at the lowerbound,
                        // or at maxBound if the lowerbound overlaps with limitMax
                        return truncateAtLowerBoundOrMax(scope, stream, context, lowerBound, limitMax, maxBound);
                    case Overlaps:
                        // min overlaps with lb (case 5 above): choose a streamcut from the retention set
                        // that is behind lb and truncate there
                        return getStreamcutBeforeLowerbound(scope, stream, context, retentionSet, lowerBound);
                    case Before:
                        // min is less than (behind/before) lb. truncate at min
                        return CompletableFuture.completedFuture(limitMin.getStreamCut());
                    default:
                        throw new IllegalArgumentException("Invalid Compare streamcut response");
                }
            });
        } else {
            return CompletableFuture.completedFuture(null);
        }
    });
}
Also used : UpdateStreamEvent(io.pravega.shared.controller.event.UpdateStreamEvent) StreamCut(io.pravega.controller.stream.api.grpc.v1.Controller.StreamCut) EventStreamWriter(io.pravega.client.stream.EventStreamWriter) StreamConfiguration(io.pravega.client.stream.StreamConfiguration) AbstractStreamMetadataStore(io.pravega.controller.store.stream.AbstractStreamMetadataStore) StoreException(io.pravega.controller.store.stream.StoreException) TaskMetadataStore(io.pravega.controller.store.task.TaskMetadataStore) Duration(java.time.Duration) Map(java.util.Map) RGStreamCutRecord(io.pravega.shared.controller.event.RGStreamCutRecord) LockFailedException(io.pravega.controller.store.task.LockFailedException) ReaderGroupConfig(io.pravega.client.stream.ReaderGroupConfig) DeleteScopeStatus(io.pravega.controller.stream.api.grpc.v1.Controller.DeleteScopeStatus) StreamCutReferenceRecord(io.pravega.controller.store.stream.records.StreamCutReferenceRecord) StreamTruncationRecord(io.pravega.controller.store.stream.records.StreamTruncationRecord) DeleteStreamEvent(io.pravega.shared.controller.event.DeleteStreamEvent) Set(java.util.Set) GuardedBy(javax.annotation.concurrent.GuardedBy) ControllerEvent(io.pravega.shared.controller.event.ControllerEvent) Serializable(java.io.Serializable) ReaderGroupState(io.pravega.controller.store.stream.ReaderGroupState) StreamMetadataStore(io.pravega.controller.store.stream.StreamMetadataStore) Futures(io.pravega.common.concurrent.Futures) GrpcAuthHelper(io.pravega.controller.server.security.auth.GrpcAuthHelper) StreamMetrics(io.pravega.controller.metrics.StreamMetrics) CreateReaderGroupResponse(io.pravega.controller.stream.api.grpc.v1.Controller.CreateReaderGroupResponse) TransactionMetrics(io.pravega.controller.metrics.TransactionMetrics) RetentionPolicy(io.pravega.client.stream.RetentionPolicy) Exceptions(io.pravega.common.Exceptions) TruncateStreamEvent(io.pravega.shared.controller.event.TruncateStreamEvent) RetentionSet(io.pravega.controller.store.stream.records.RetentionSet) Supplier(java.util.function.Supplier) UpdateSubscriberStatus(io.pravega.controller.stream.api.grpc.v1.Controller.UpdateSubscriberStatus) ArrayList(java.util.ArrayList) ReaderGroupConfigRecord(io.pravega.controller.store.stream.records.ReaderGroupConfigRecord) ReaderGroupConfiguration(io.pravega.controller.stream.api.grpc.v1.Controller.ReaderGroupConfiguration) StreamInfo(io.pravega.controller.stream.api.grpc.v1.Controller.StreamInfo) ScheduledExecutorService(java.util.concurrent.ScheduledExecutorService) EventHelper(io.pravega.controller.task.EventHelper) CreateReaderGroupEvent(io.pravega.shared.controller.event.CreateReaderGroupEvent) RetryHelper(io.pravega.controller.util.RetryHelper) Task(io.pravega.controller.task.Task) Executor(java.util.concurrent.Executor) CreateStreamResponse(io.pravega.controller.store.stream.CreateStreamResponse) WireCommands(io.pravega.shared.protocol.netty.WireCommands) SegmentRange(io.pravega.controller.stream.api.grpc.v1.Controller.SegmentRange) AtomicLong(java.util.concurrent.atomic.AtomicLong) StreamConfigurationRecord(io.pravega.controller.store.stream.records.StreamConfigurationRecord) TreeMap(java.util.TreeMap) WireCommandFailedException(io.pravega.controller.server.WireCommandFailedException) Preconditions(com.google.common.base.Preconditions) DeleteReaderGroupStatus(io.pravega.controller.stream.api.grpc.v1.Controller.DeleteReaderGroupStatus) ScalingPolicy(io.pravega.client.stream.ScalingPolicy) 
ControllerEventProcessors(io.pravega.controller.server.eventProcessor.ControllerEventProcessors) StreamSegmentRecord(io.pravega.controller.store.stream.records.StreamSegmentRecord) DeleteReaderGroupEvent(io.pravega.shared.controller.event.DeleteReaderGroupEvent) LoggerFactory(org.slf4j.LoggerFactory) TimeoutException(java.util.concurrent.TimeoutException) SealStreamEvent(io.pravega.shared.controller.event.SealStreamEvent) TagLogger(io.pravega.common.tracing.TagLogger) VersionedMetadata(io.pravega.controller.store.VersionedMetadata) TaskStepsRetryHelper.withRetries(io.pravega.controller.task.Stream.TaskStepsRetryHelper.withRetries) Stream(io.pravega.client.stream.Stream) SubscribersResponse(io.pravega.controller.stream.api.grpc.v1.Controller.SubscribersResponse) Controller(io.pravega.controller.stream.api.grpc.v1.Controller) EpochTransitionRecord(io.pravega.controller.store.stream.records.EpochTransitionRecord) CreateStreamStatus(io.pravega.controller.stream.api.grpc.v1.Controller.CreateStreamStatus) ImmutableSet(com.google.common.collect.ImmutableSet) ImmutableMap(com.google.common.collect.ImmutableMap) EpochTransitionOperationExceptions(io.pravega.controller.store.stream.EpochTransitionOperationExceptions) DeleteScopeEvent(io.pravega.shared.controller.event.DeleteScopeEvent) CompletionException(java.util.concurrent.CompletionException) UUID(java.util.UUID) Collectors(java.util.stream.Collectors) RetriesExhaustedException(io.pravega.common.util.RetriesExhaustedException) List(java.util.List) Config(io.pravega.controller.util.Config) RetryableException(io.pravega.controller.retryable.RetryableException) Optional(java.util.Optional) Resource(io.pravega.controller.store.task.Resource) IntStream(java.util.stream.IntStream) OperationContext(io.pravega.controller.store.stream.OperationContext) SegmentHelper(io.pravega.controller.server.SegmentHelper) ModelHelper(io.pravega.client.control.impl.ModelHelper) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) ScaleStatusResponse(io.pravega.controller.stream.api.grpc.v1.Controller.ScaleStatusResponse) HashMap(java.util.HashMap) CompletableFuture(java.util.concurrent.CompletableFuture) AtomicReference(java.util.concurrent.atomic.AtomicReference) Function(java.util.function.Function) BucketStore(io.pravega.controller.store.stream.BucketStore) DeleteStreamStatus(io.pravega.controller.stream.api.grpc.v1.Controller.DeleteStreamStatus) EventStreamClientFactory(io.pravega.client.EventStreamClientFactory) NameUtils.getQualifiedStreamSegmentName(io.pravega.shared.NameUtils.getQualifiedStreamSegmentName) ScaleResponse(io.pravega.controller.stream.api.grpc.v1.Controller.ScaleResponse) EventWriterConfig(io.pravega.client.stream.EventWriterConfig) ControllerService(io.pravega.controller.server.ControllerService) UpdateReaderGroupResponse(io.pravega.controller.stream.api.grpc.v1.Controller.UpdateReaderGroupResponse) NameUtils(io.pravega.shared.NameUtils) Iterator(java.util.Iterator) TaskBase(io.pravega.controller.task.TaskBase) StreamCutRecord(io.pravega.controller.store.stream.records.StreamCutRecord) Timer(io.pravega.common.Timer) AbstractMap(java.util.AbstractMap) EpochRecord(io.pravega.controller.store.stream.records.EpochRecord) UpdateStreamStatus(io.pravega.controller.stream.api.grpc.v1.Controller.UpdateStreamStatus) State(io.pravega.controller.store.stream.State) VisibleForTesting(com.google.common.annotations.VisibleForTesting) UpdateReaderGroupEvent(io.pravega.shared.controller.event.UpdateReaderGroupEvent) Comparator(java.util.Comparator) 
ReaderGroupConfigResponse(io.pravega.controller.stream.api.grpc.v1.Controller.ReaderGroupConfigResponse) ScaleOpEvent(io.pravega.shared.controller.event.ScaleOpEvent) StreamCutReferenceRecord(io.pravega.controller.store.stream.records.StreamCutReferenceRecord) RGStreamCutRecord(io.pravega.shared.controller.event.RGStreamCutRecord) StreamCutRecord(io.pravega.controller.store.stream.records.StreamCutRecord) Map(java.util.Map) TreeMap(java.util.TreeMap) ImmutableMap(com.google.common.collect.ImmutableMap) HashMap(java.util.HashMap) AbstractMap(java.util.AbstractMap)
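
To make the min/max bound comments above concrete, here is a simplified sketch of the bound selection. The real logic lives in getBoundStreamCuts, which is not shown on this page; the eligibility rule below (a record qualifies when its age falls within the policy's [min, max] window, the oldest eligible record becoming max and the most recent becoming min) is an assumption inferred from the comments:

// Hypothetical sketch, not the Pravega implementation of getBoundStreamCuts.
// ageFn yields a record's age, e.g. x -> currentTime - x.getRecordingTime().
private static Map.Entry<StreamCutReferenceRecord, StreamCutReferenceRecord> boundsSketch(
        RetentionSet retentionSet, long retentionMin, long retentionMax,
        Function<StreamCutReferenceRecord, Long> ageFn) {
    // a record is eligible when its age lies within [retentionMin, retentionMax]
    List<StreamCutReferenceRecord> eligible = retentionSet.getRetentionRecords().stream()
            .filter(x -> ageFn.apply(x) >= retentionMin && ageFn.apply(x) <= retentionMax)
            .collect(Collectors.toList());
    // key = max bound (oldest eligible record), value = min bound (most recent eligible record)
    return new AbstractMap.SimpleEntry<>(
            eligible.stream().min(Comparator.comparingLong(StreamCutReferenceRecord::getRecordingTime)).orElse(null),
            eligible.stream().max(Comparator.comparingLong(StreamCutReferenceRecord::getRecordingTime)).orElse(null));
}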

Example 3 with StreamCutRecord

Use of io.pravega.controller.store.stream.records.StreamCutRecord in project pravega by pravega.

In the class StreamMetadataTasksTest, method consumptionBasedRetentionTimeLimitWithOverlappingMinTest.

@Test(timeout = 30000)
public void consumptionBasedRetentionTimeLimitWithOverlappingMinTest() throws Exception {
    final ScalingPolicy policy = ScalingPolicy.fixed(2);
    final RetentionPolicy retentionPolicy = RetentionPolicy.byTime(Duration.ofMillis(10), Duration.ofMillis(50));
    String stream1 = "consumptionSizeOverlap";
    final StreamConfiguration configuration = StreamConfiguration.builder().scalingPolicy(policy).retentionPolicy(retentionPolicy).build();
    streamStorePartialMock.createStream(SCOPE, stream1, configuration, System.currentTimeMillis(), null, executor).get();
    streamStorePartialMock.setState(SCOPE, stream1, State.ACTIVE, null, executor).get();
    doReturn(CompletableFuture.completedFuture(Controller.CreateStreamStatus.Status.SUCCESS)).when(streamMetadataTasks).createRGStream(anyString(), anyString(), any(), anyLong(), anyInt(), anyLong());
    WriterMock requestEventWriter = new WriterMock(streamMetadataTasks, executor);
    streamMetadataTasks.setRequestEventWriter(requestEventWriter);
    streamMetadataTasks.setRetentionFrequencyMillis(1L);
    AtomicLong time = new AtomicLong(0L);
    streamMetadataTasks.setRetentionClock(time::get);
    final Segment seg0 = new Segment(SCOPE, stream1, 0L);
    final Segment seg1 = new Segment(SCOPE, stream1, 1L);
    ImmutableMap<Segment, Long> startStreamCut = ImmutableMap.of(seg0, 0L, seg1, 0L);
    Map<Stream, StreamCut> startSC = ImmutableMap.of(Stream.of(SCOPE, stream1), new StreamCutImpl(Stream.of(SCOPE, stream1), startStreamCut));
    ImmutableMap<Segment, Long> endStreamCut = ImmutableMap.of(seg0, 2000L, seg1, 3000L);
    Map<Stream, StreamCut> endSC = ImmutableMap.of(Stream.of(SCOPE, stream1), new StreamCutImpl(Stream.of(SCOPE, stream1), endStreamCut));
    ReaderGroupConfig consumpRGConfig = ReaderGroupConfig.builder().automaticCheckpointIntervalMillis(30000L).groupRefreshTimeMillis(20000L).maxOutstandingCheckpointRequest(2).retentionType(ReaderGroupConfig.StreamDataRetention.AUTOMATIC_RELEASE_AT_LAST_CHECKPOINT).startingStreamCuts(startSC).endingStreamCuts(endSC).build();
    consumpRGConfig = ReaderGroupConfig.cloneConfig(consumpRGConfig, UUID.randomUUID(), 0L);
    doReturn(CompletableFuture.completedFuture(Controller.CreateStreamStatus.Status.SUCCESS)).when(streamMetadataTasks).createRGStream(anyString(), anyString(), any(), anyLong(), anyInt(), anyLong());
    String subscriber1 = "subscriber1";
    CompletableFuture<Controller.CreateReaderGroupResponse> createStatus = streamMetadataTasks.createReaderGroup(SCOPE, subscriber1, consumpRGConfig, System.currentTimeMillis(), 0L);
    assertTrue(Futures.await(processEvent(requestEventWriter)));
    assertEquals(Controller.CreateReaderGroupResponse.Status.SUCCESS, createStatus.join().getStatus());
    // create a retention set that will eventually have 5 values; add the first two:
    // s0: 10: seg0/1, seg1/5 ==> time retained if truncated at = 10 <= min
    // s1: 20: seg0/1, seg1/6 ==> time retained if truncated at = 0
    time.set(10L);
    streamStorePartialMock.addStreamCutToRetentionSet(SCOPE, stream1, new StreamCutRecord(time.get(), 5L, ImmutableMap.of(0L, 1L, 1L, 5L)), null, executor).join();
    time.set(20L);
    streamStorePartialMock.addStreamCutToRetentionSet(SCOPE, stream1, new StreamCutRecord(time.get(), 6L, ImmutableMap.of(0L, 1L, 1L, 6L)), null, executor).join();
    // subscriber streamcut : 0/0, 1/10
    final String subscriber1Name = NameUtils.getScopedReaderGroupName(SCOPE, subscriber1);
    streamMetadataTasks.updateSubscriberStreamCut(SCOPE, stream1, subscriber1Name, consumpRGConfig.getReaderGroupId().toString(), 0L, ImmutableMap.of(0L, 0L, 1L, 10L), 0L).join();
    // lb overlaps with min and there is no clear max, so no truncation happens.
    streamMetadataTasks.retention(SCOPE, stream1, retentionPolicy, time.get(), null, "").join();
    VersionedMetadata<StreamTruncationRecord> truncationRecord = streamStorePartialMock.getTruncationRecord(SCOPE, stream1, null, executor).join();
    assertFalse(truncationRecord.getObject().isUpdating());
    // s0: 10: seg0/1, seg1/5 ==> time retained if truncated at = 40 <== max
    // s1: 20: seg0/1, seg1/6 ==> time retained if truncated at = 30
    // s2: 30: seg0/10, seg1/7 ==> time retained if truncated at = 20
    // s3: 40: seg0/10, seg1/8 ==> time retained if truncated at = 10  <== min
    // s4: 50: seg0/10, seg1/10
    time.set(30L);
    streamStorePartialMock.addStreamCutToRetentionSet(SCOPE, stream1, new StreamCutRecord(time.get(), 17L, ImmutableMap.of(0L, 10L, 1L, 7L)), null, executor).join();
    time.set(40L);
    streamStorePartialMock.addStreamCutToRetentionSet(SCOPE, stream1, new StreamCutRecord(time.get(), 18L, ImmutableMap.of(0L, 10L, 1L, 8L)), null, executor).join();
    time.set(50L);
    streamStorePartialMock.addStreamCutToRetentionSet(SCOPE, stream1, new StreamCutRecord(time.get(), 20L, ImmutableMap.of(0L, 10L, 1L, 10L)), null, executor).join();
    // subscriber streamcut (slb): seg0/9, seg1/10 => overlaps with the min bound streamcut,
    // so we should actually truncate at a streamcut before slb.
    streamMetadataTasks.updateSubscriberStreamCut(SCOPE, stream1, subscriber1Name, consumpRGConfig.getReaderGroupId().toString(), 0L, ImmutableMap.of(0L, 9L, 1L, 10L), 0L).join();
    // this should truncate at s1, the first streamcut before slb.
    streamMetadataTasks.retention(SCOPE, stream1, retentionPolicy, time.get(), null, "").join();
    truncationRecord = streamStorePartialMock.getTruncationRecord(SCOPE, stream1, null, executor).join();
    assertEquals(truncationRecord.getObject().getStreamCut().get(0L).longValue(), 1L);
    assertEquals(truncationRecord.getObject().getStreamCut().get(1L).longValue(), 6L);
    assertTrue(truncationRecord.getObject().isUpdating());
    streamStorePartialMock.completeTruncation(SCOPE, stream1, truncationRecord, null, executor).join();
}
Also used : ReaderGroupConfig(io.pravega.client.stream.ReaderGroupConfig) ScalingPolicy(io.pravega.client.stream.ScalingPolicy) StreamCut(io.pravega.client.stream.StreamCut) StreamTruncationRecord(io.pravega.controller.store.stream.records.StreamTruncationRecord) StreamCutImpl(io.pravega.client.stream.impl.StreamCutImpl) ArgumentMatchers.anyString(org.mockito.ArgumentMatchers.anyString) RetentionPolicy(io.pravega.client.stream.RetentionPolicy) Segment(io.pravega.client.segment.impl.Segment) AtomicLong(java.util.concurrent.atomic.AtomicLong) StreamConfiguration(io.pravega.client.stream.StreamConfiguration) AtomicLong(java.util.concurrent.atomic.AtomicLong) ArgumentMatchers.anyLong(org.mockito.ArgumentMatchers.anyLong) Stream(io.pravega.client.stream.Stream) ControllerEventStreamWriterMock(io.pravega.controller.mocks.ControllerEventStreamWriterMock) EventStreamWriterMock(io.pravega.controller.mocks.EventStreamWriterMock) RGStreamCutRecord(io.pravega.shared.controller.event.RGStreamCutRecord) StreamCutRecord(io.pravega.controller.store.stream.records.StreamCutRecord) Test(org.junit.Test)
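
As a worked check of the "time retained" annotations in the second comment block above (assuming the retained time after truncating at a cut is simply currentTime - recordingTime, per the retention clock):

// Reproduces the annotations at time = 50 under RetentionPolicy.byTime(Duration.ofMillis(10), Duration.ofMillis(50)).
long currentTime = 50L;
long minLimit = 10L;
long maxLimit = 50L;
ImmutableMap<String, Long> recordingTimes = ImmutableMap.of("s0", 10L, "s1", 20L, "s2", 30L, "s3", 40L, "s4", 50L);
recordingTimes.forEach((name, recordedAt) -> {
    long retained = currentTime - recordedAt; // time retained if truncated at this cut
    boolean eligible = retained >= minLimit && retained <= maxLimit;
    // s0 (retained=40) is the oldest eligible cut => max bound,
    // s3 (retained=10) is the most recent eligible cut => min bound, s4 is too fresh.
    System.out.printf("%s: retained=%d, eligible=%b%n", name, retained, eligible);
});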

Example 4 with StreamCutRecord

Use of io.pravega.controller.store.stream.records.StreamCutRecord in project pravega by pravega.

In the class StreamMetadataTasksTest, method consumptionBasedRetentionSizeLimitTest.

@Test(timeout = 30000)
public void consumptionBasedRetentionSizeLimitTest() throws Exception {
    final ScalingPolicy policy = ScalingPolicy.fixed(2);
    final RetentionPolicy retentionPolicy = RetentionPolicy.bySizeBytes(2L, 10L);
    String stream1 = "consumptionSize";
    final StreamConfiguration configuration = StreamConfiguration.builder().scalingPolicy(policy).retentionPolicy(retentionPolicy).build();
    streamStorePartialMock.createStream(SCOPE, stream1, configuration, System.currentTimeMillis(), null, executor).get();
    streamStorePartialMock.setState(SCOPE, stream1, State.ACTIVE, null, executor).get();
    final Segment seg0 = new Segment(SCOPE, stream1, 0L);
    final Segment seg1 = new Segment(SCOPE, stream1, 1L);
    ImmutableMap<Segment, Long> startStreamCut = ImmutableMap.of(seg0, 0L, seg1, 0L);
    Map<Stream, StreamCut> startSC = ImmutableMap.of(Stream.of(SCOPE, stream1), new StreamCutImpl(Stream.of(SCOPE, stream1), startStreamCut));
    ImmutableMap<Segment, Long> endStreamCut = ImmutableMap.of(seg0, 2000L, seg1, 3000L);
    Map<Stream, StreamCut> endSC = ImmutableMap.of(Stream.of(SCOPE, stream1), new StreamCutImpl(Stream.of(SCOPE, stream1), endStreamCut));
    ReaderGroupConfig consumpRGConfig = ReaderGroupConfig.builder().automaticCheckpointIntervalMillis(30000L).groupRefreshTimeMillis(20000L).maxOutstandingCheckpointRequest(2).retentionType(ReaderGroupConfig.StreamDataRetention.AUTOMATIC_RELEASE_AT_LAST_CHECKPOINT).startingStreamCuts(startSC).endingStreamCuts(endSC).build();
    consumpRGConfig = ReaderGroupConfig.cloneConfig(consumpRGConfig, UUID.randomUUID(), 0L);
    assertNotEquals(0, consumer.getCurrentSegments(SCOPE, stream1, 0L).get().size());
    doReturn(CompletableFuture.completedFuture(Controller.CreateStreamStatus.Status.SUCCESS)).when(streamMetadataTasks).createRGStream(anyString(), anyString(), any(), anyLong(), anyInt(), anyLong());
    WriterMock requestEventWriter = new WriterMock(streamMetadataTasks, executor);
    streamMetadataTasks.setRequestEventWriter(requestEventWriter);
    streamMetadataTasks.setRetentionFrequencyMillis(1L);
    // region case 1: basic retention
    String subscriber1 = "subscriber1";
    CompletableFuture<Controller.CreateReaderGroupResponse> createStatus = streamMetadataTasks.createReaderGroup(SCOPE, subscriber1, consumpRGConfig, System.currentTimeMillis(), 0L);
    assertTrue(Futures.await(processEvent(requestEventWriter)));
    assertEquals(Controller.CreateReaderGroupResponse.Status.SUCCESS, createStatus.join().getStatus());
    String subscriber2 = "subscriber2";
    createStatus = streamMetadataTasks.createReaderGroup(SCOPE, subscriber2, consumpRGConfig, System.currentTimeMillis(), 0L);
    assertTrue(Futures.await(processEvent(requestEventWriter)));
    assertEquals(Controller.CreateReaderGroupResponse.Status.SUCCESS, createStatus.join().getStatus());
    final String subscriber1Name = NameUtils.getScopedReaderGroupName(SCOPE, subscriber1);
    streamMetadataTasks.updateSubscriberStreamCut(SCOPE, stream1, subscriber1Name, consumpRGConfig.getReaderGroupId().toString(), 0L, ImmutableMap.of(0L, 2L, 1L, 1L), 0L).join();
    final String subscriber2Name = NameUtils.getScopedReaderGroupName(SCOPE, subscriber2);
    streamMetadataTasks.updateSubscriberStreamCut(SCOPE, stream1, subscriber2Name, consumpRGConfig.getReaderGroupId().toString(), 0L, ImmutableMap.of(0L, 1L, 1L, 2L), 0L).join();
    Map<Long, Long> map1 = new HashMap<>();
    map1.put(0L, 2L);
    map1.put(1L, 2L);
    long size = streamStorePartialMock.getSizeTillStreamCut(SCOPE, stream1, map1, Optional.empty(), null, executor).join();
    doReturn(CompletableFuture.completedFuture(new StreamCutRecord(1L, size, ImmutableMap.copyOf(map1)))).when(streamMetadataTasks).generateStreamCut(anyString(), anyString(), any(), any(), any());
    // call retention and verify that retention policy applies
    streamMetadataTasks.retention(SCOPE, stream1, retentionPolicy, 1L, null, "").join();
    // now the retention set has one streamcut: 0/2, 1/2
    // subscriber lowerbound is 0/1, 1/1.. truncation should happen at the lowerbound
    VersionedMetadata<StreamTruncationRecord> truncationRecord = streamStorePartialMock.getTruncationRecord(SCOPE, stream1, null, executor).join();
    assertEquals(truncationRecord.getObject().getStreamCut().get(0L).longValue(), 1L);
    assertEquals(truncationRecord.getObject().getStreamCut().get(1L).longValue(), 1L);
    assertTrue(truncationRecord.getObject().isUpdating());
    streamStorePartialMock.completeTruncation(SCOPE, stream1, truncationRecord, null, executor).join();
    // endregion
    // region case 2 min policy check
    // we will add another streamcut at 0/2, 1/2 (same positions as before)
    map1.put(0L, 2L);
    map1.put(1L, 2L);
    size = streamStorePartialMock.getSizeTillStreamCut(SCOPE, stream1, map1, Optional.empty(), null, executor).join();
    doReturn(CompletableFuture.completedFuture(new StreamCutRecord(20L, size, ImmutableMap.copyOf(map1)))).when(streamMetadataTasks).generateStreamCut(anyString(), anyString(), any(), any(), any());
    // update both readers to make sure they have read till the latest position. we have set the min limit to 2.
    streamMetadataTasks.updateSubscriberStreamCut(SCOPE, stream1, subscriber1Name, consumpRGConfig.getReaderGroupId().toString(), 0L, ImmutableMap.of(0L, 2L, 1L, 2L), 0L).join();
    streamMetadataTasks.updateSubscriberStreamCut(SCOPE, stream1, subscriber2Name, consumpRGConfig.getReaderGroupId().toString(), 0L, ImmutableMap.of(0L, 2L, 1L, 2L), 0L).join();
    // no new truncation should happen.
    // verify that truncation record has not changed.
    streamMetadataTasks.retention(SCOPE, stream1, retentionPolicy, 20L, null, "").join();
    // now the retention set has two streamcuts: 0/2, 1/2...0/2, 1/2
    // subscriber lowerbound is 0/2, 1/2.. it does not meet the min bound criteria, and we also do not have a max that
    // satisfies the limit, so no truncation should happen.
    // no change:
    truncationRecord = streamStorePartialMock.getTruncationRecord(SCOPE, stream1, null, executor).join();
    assertEquals(truncationRecord.getObject().getStreamCut().get(0L).longValue(), 1L);
    assertEquals(truncationRecord.getObject().getStreamCut().get(1L).longValue(), 1L);
    assertFalse(truncationRecord.getObject().isUpdating());
    // endregion
    // region case 3: min criteria not met on lower bound. truncate at min.
    map1.put(0L, 10L);
    map1.put(1L, 10L);
    size = streamStorePartialMock.getSizeTillStreamCut(SCOPE, stream1, map1, Optional.empty(), null, executor).join();
    doReturn(CompletableFuture.completedFuture(new StreamCutRecord(30L, size, ImmutableMap.copyOf(map1)))).when(streamMetadataTasks).generateStreamCut(anyString(), anyString(), any(), any(), any());
    // update both readers to make sure they have read till the latest position - 1. we have set the min limit to 2.
    streamMetadataTasks.updateSubscriberStreamCut(SCOPE, stream1, subscriber1Name, consumpRGConfig.getReaderGroupId().toString(), 0L, ImmutableMap.of(0L, 10L, 1L, 9L), 0L).join();
    streamMetadataTasks.updateSubscriberStreamCut(SCOPE, stream1, subscriber2Name, consumpRGConfig.getReaderGroupId().toString(), 0L, ImmutableMap.of(0L, 10L, 1L, 9L), 0L).join();
    streamMetadataTasks.retention(SCOPE, stream1, retentionPolicy, 30L, null, "").join();
    // now the retention set has three streamcuts: 0/2, 1/2...0/2, 1/2... 0/10, 1/10
    // subscriber lowerbound is 0/10, 1/9.. it does not meet the min bound criteria, but a min bound streamcut exists,
    // so truncation should happen at the min bound, 0/2, 1/2
    truncationRecord = streamStorePartialMock.getTruncationRecord(SCOPE, stream1, null, executor).join();
    assertEquals(truncationRecord.getObject().getStreamCut().get(0L).longValue(), 2L);
    assertEquals(truncationRecord.getObject().getStreamCut().get(1L).longValue(), 2L);
    assertTrue(truncationRecord.getObject().isUpdating());
    streamStorePartialMock.completeTruncation(SCOPE, stream1, truncationRecord, null, executor).join();
    // endregion
    // region case 4: lowerbound behind max
    // now move the stream further ahead so that max truncation limit is crossed but lowerbound is behind max.
    map1.put(0L, 20L);
    map1.put(1L, 20L);
    size = streamStorePartialMock.getSizeTillStreamCut(SCOPE, stream1, map1, Optional.empty(), null, executor).join();
    doReturn(CompletableFuture.completedFuture(new StreamCutRecord(40L, size, ImmutableMap.copyOf(map1)))).when(streamMetadataTasks).generateStreamCut(anyString(), anyString(), any(), any(), any());
    streamMetadataTasks.retention(SCOPE, stream1, retentionPolicy, 40L, null, "").join();
    // now the retention set has four streamcuts: 0/2, 1/2...0/2, 1/2... 0/10, 1/10.. 0/20, 1/20
    // subscriber lowerbound is 0/10, 1/9.. it meets the min bound criteria but goes beyond the max criteria.
    // no streamcut can be chosen from the available streamcuts in the retention set without breaking either the min or max criteria.
    // in this case max will be chosen as min, i.e. 0/10, 1/10.. this will be compared with the subscriber lowerbound
    // and whichever purges more data will be chosen.
    // so truncation should happen at 0/10, 1/10
    truncationRecord = streamStorePartialMock.getTruncationRecord(SCOPE, stream1, null, executor).join();
    assertEquals(truncationRecord.getObject().getStreamCut().get(0L).longValue(), 10L);
    assertEquals(truncationRecord.getObject().getStreamCut().get(1L).longValue(), 10L);
    assertTrue(truncationRecord.getObject().isUpdating());
    streamStorePartialMock.completeTruncation(SCOPE, stream1, truncationRecord, null, executor).join();
    // endregion
    // region case 5: lowerbound overlaps or is beyond max, but there is no clear max streamcut available in the retention set
    map1.put(0L, 30L);
    map1.put(1L, 30L);
    size = streamStorePartialMock.getSizeTillStreamCut(SCOPE, stream1, map1, Optional.empty(), null, executor).join();
    doReturn(CompletableFuture.completedFuture(new StreamCutRecord(50L, size, ImmutableMap.copyOf(map1)))).when(streamMetadataTasks).generateStreamCut(anyString(), anyString(), any(), any(), any());
    // update both readers to make sure they have read till the latest position - 1. we have set the min limit to 2.
    streamMetadataTasks.updateSubscriberStreamCut(SCOPE, stream1, subscriber1Name, consumpRGConfig.getReaderGroupId().toString(), 0L, ImmutableMap.of(0L, 21L, 1L, 19L), 0L).join();
    streamMetadataTasks.updateSubscriberStreamCut(SCOPE, stream1, subscriber2Name, consumpRGConfig.getReaderGroupId().toString(), 0L, ImmutableMap.of(0L, 21L, 1L, 19L), 0L).join();
    streamMetadataTasks.retention(SCOPE, stream1, retentionPolicy, 50L, null, "").join();
    // now the retention set has five streamcuts: 0/2, 1/2...0/2, 1/2... 0/10, 1/10.. 0/20, 1/20.. 0/30, 1/30
    // subscriber lowerbound is 0/21, 1/19.. it meets the min bound criteria and is also greater than the max bound.
    // but a max bound streamcut cannot be chosen from the retention set, same as the previous case..
    // this time, however, we have a min bound and a max bound at 0/20, 1/20.
    // truncation should happen at the lowerbound, as the data retained is identical for the lowerbound and the streamcut from the retention set.
    truncationRecord = streamStorePartialMock.getTruncationRecord(SCOPE, stream1, null, executor).join();
    assertEquals(truncationRecord.getObject().getStreamCut().get(0L).longValue(), 21L);
    assertEquals(truncationRecord.getObject().getStreamCut().get(1L).longValue(), 19L);
    assertTrue(truncationRecord.getObject().isUpdating());
    streamStorePartialMock.completeTruncation(SCOPE, stream1, truncationRecord, null, executor).join();
// endregion
}
Also used : ReaderGroupConfig(io.pravega.client.stream.ReaderGroupConfig) ScalingPolicy(io.pravega.client.stream.ScalingPolicy) StreamCut(io.pravega.client.stream.StreamCut) StreamTruncationRecord(io.pravega.controller.store.stream.records.StreamTruncationRecord) StreamCutImpl(io.pravega.client.stream.impl.StreamCutImpl) HashMap(java.util.HashMap) ArgumentMatchers.anyString(org.mockito.ArgumentMatchers.anyString) RetentionPolicy(io.pravega.client.stream.RetentionPolicy) Segment(io.pravega.client.segment.impl.Segment) StreamConfiguration(io.pravega.client.stream.StreamConfiguration) AtomicLong(java.util.concurrent.atomic.AtomicLong) ArgumentMatchers.anyLong(org.mockito.ArgumentMatchers.anyLong) Stream(io.pravega.client.stream.Stream) ControllerEventStreamWriterMock(io.pravega.controller.mocks.ControllerEventStreamWriterMock) EventStreamWriterMock(io.pravega.controller.mocks.EventStreamWriterMock) RGStreamCutRecord(io.pravega.shared.controller.event.RGStreamCutRecord) StreamCutRecord(io.pravega.controller.store.stream.records.StreamCutRecord) Test(org.junit.Test)
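
The size-based cases above follow the same pattern as the time-based ones. As a rough illustration (the rule below is an assumption inferred from the comments, not a Pravega API), a streamcut is an eligible truncation point under RetentionPolicy.bySizeBytes(min, max) when the data remaining after truncating there stays within the [min, max] window:

// Hypothetical eligibility check for size-based retention.
// sizeTillStreamCut is what getSizeTillStreamCut returns for the candidate cut;
// totalStreamSize is the size till the stream's tail.
static boolean eligibleBySize(long totalStreamSize, long sizeTillStreamCut, long minBytes, long maxBytes) {
    long retainedIfTruncated = totalStreamSize - sizeTillStreamCut; // bytes kept after truncation
    return retainedIfTruncated >= minBytes && retainedIfTruncated <= maxBytes;
}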

Example 5 with StreamCutRecord

Use of io.pravega.controller.store.stream.records.StreamCutRecord in project pravega by pravega.

In the class StreamMetadataTasksTest, method consumptionBasedRetentionWithScale.

@Test(timeout = 30000)
public void consumptionBasedRetentionWithScale() throws Exception {
    final ScalingPolicy policy = ScalingPolicy.fixed(3);
    final RetentionPolicy retentionPolicy = RetentionPolicy.bySizeBytes(0L, 1000L);
    String stream1 = "consumptionSize";
    StreamConfiguration configuration = StreamConfiguration.builder().scalingPolicy(policy).retentionPolicy(retentionPolicy).build();
    streamStorePartialMock.createStream(SCOPE, stream1, configuration, System.currentTimeMillis(), null, executor).get();
    streamStorePartialMock.setState(SCOPE, stream1, State.ACTIVE, null, executor).get();
    configuration = StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(1)).retentionPolicy(retentionPolicy).build();
    streamStorePartialMock.startUpdateConfiguration(SCOPE, stream1, configuration, null, executor).join();
    VersionedMetadata<StreamConfigurationRecord> configRecord = streamStorePartialMock.getConfigurationRecord(SCOPE, stream1, null, executor).join();
    streamStorePartialMock.completeUpdateConfiguration(SCOPE, stream1, configRecord, null, executor).join();
    final Segment seg0 = new Segment(SCOPE, stream1, 0L);
    final Segment seg1 = new Segment(SCOPE, stream1, 1L);
    ImmutableMap<Segment, Long> startStreamCut = ImmutableMap.of(seg0, 0L, seg1, 0L);
    Map<Stream, StreamCut> startSC = ImmutableMap.of(Stream.of(SCOPE, stream1), new StreamCutImpl(Stream.of(SCOPE, stream1), startStreamCut));
    ImmutableMap<Segment, Long> endStreamCut = ImmutableMap.of(seg0, 2000L, seg1, 3000L);
    Map<Stream, StreamCut> endSC = ImmutableMap.of(Stream.of(SCOPE, stream1), new StreamCutImpl(Stream.of(SCOPE, stream1), endStreamCut));
    ReaderGroupConfig consumpRGConfig = ReaderGroupConfig.builder().automaticCheckpointIntervalMillis(30000L).groupRefreshTimeMillis(20000L).maxOutstandingCheckpointRequest(2).retentionType(ReaderGroupConfig.StreamDataRetention.AUTOMATIC_RELEASE_AT_LAST_CHECKPOINT).startingStreamCuts(startSC).endingStreamCuts(endSC).build();
    consumpRGConfig = ReaderGroupConfig.cloneConfig(consumpRGConfig, UUID.randomUUID(), 0L);
    doReturn(CompletableFuture.completedFuture(Controller.CreateStreamStatus.Status.SUCCESS)).when(streamMetadataTasks).createRGStream(anyString(), anyString(), any(), anyLong(), anyInt(), anyLong());
    WriterMock requestEventWriter = new WriterMock(streamMetadataTasks, executor);
    streamMetadataTasks.setRequestEventWriter(requestEventWriter);
    String subscriber1 = "subscriber1";
    CompletableFuture<Controller.CreateReaderGroupResponse> createStatus = streamMetadataTasks.createReaderGroup(SCOPE, subscriber1, consumpRGConfig, System.currentTimeMillis(), 0L);
    assertTrue(Futures.await(processEvent(requestEventWriter)));
    Controller.CreateReaderGroupResponse createResponse1 = createStatus.join();
    assertEquals(Controller.CreateReaderGroupResponse.Status.SUCCESS, createResponse1.getStatus());
    assertEquals(0L, createResponse1.getConfig().getGeneration());
    assertFalse(ReaderGroupConfig.DEFAULT_UUID.toString().equals(createResponse1.getConfig().getReaderGroupId()));
    String subscriber2 = "subscriber2";
    createStatus = streamMetadataTasks.createReaderGroup(SCOPE, subscriber2, consumpRGConfig, System.currentTimeMillis(), 0L);
    assertTrue(Futures.await(processEvent(requestEventWriter)));
    Controller.CreateReaderGroupResponse createResponse2 = createStatus.join();
    assertEquals(Controller.CreateReaderGroupResponse.Status.SUCCESS, createResponse2.getStatus());
    assertEquals(0L, createResponse2.getConfig().getGeneration());
    assertFalse(ReaderGroupConfig.DEFAULT_UUID.toString().equals(createResponse2.getConfig().getReaderGroupId()));
    final String subscriber1Name = NameUtils.getScopedReaderGroupName(SCOPE, subscriber1);
    final String subscriber2Name = NameUtils.getScopedReaderGroupName(SCOPE, subscriber2);
    // example:
    // | s0 | s3      |
    // |    | s4 |    | s6
    // | s1      | s5 |
    // | s2      |    |
    // valid stream cuts: { s0/off, s5/-1 }, { s0/off, s2/off, s5/-1 }
    // lower bound = { s0/off, s2/off, s5/-1 }
    // valid stream cuts: { s0/off, s5/-1 }, { s0/off, s2/off, s5/-1 }, { s0/off, s1/off, s2/off }
    // lower bound = { s0/off, s1/off, s2/off }
    long three = NameUtils.computeSegmentId(3, 1);
    long four = NameUtils.computeSegmentId(4, 1);
    long five = NameUtils.computeSegmentId(5, 2);
    long six = NameUtils.computeSegmentId(6, 3);
    // 0 split to 3 and 4
    scale(SCOPE, stream1, ImmutableMap.of(0L, 1L), Lists.newArrayList(new AbstractMap.SimpleEntry<>(0.0, 1.0 / 6), new AbstractMap.SimpleEntry<>(1.0 / 6, 1.0 / 3)));
    // 4, 1, 2 merged to 5
    scale(SCOPE, stream1, ImmutableMap.of(1L, 1L, 2L, 2L, four, 1L), Lists.newArrayList(new AbstractMap.SimpleEntry<>(1.0 / 6, 1.0)));
    // merge 3, 5 to 6
    scale(SCOPE, stream1, ImmutableMap.of(three, 1L, five, 2L), Lists.newArrayList(new AbstractMap.SimpleEntry<>(0.0, 1.0)));
    assertNotEquals(0, consumer.getCurrentSegments(SCOPE, stream1, 0L).get().size());
    streamMetadataTasks.setRetentionFrequencyMillis(1L);
    streamMetadataTasks.updateSubscriberStreamCut(SCOPE, stream1, subscriber1Name, createResponse1.getConfig().getReaderGroupId(), createResponse1.getConfig().getGeneration(), ImmutableMap.of(0L, 1L, five, -1L), 0L).join();
    streamMetadataTasks.updateSubscriberStreamCut(SCOPE, stream1, subscriber2Name, createResponse2.getConfig().getReaderGroupId(), createResponse2.getConfig().getGeneration(), ImmutableMap.of(0L, 1L, 2L, 1L, five, -1L), 0L).join();
    Map<Long, Long> map1 = new HashMap<>();
    map1.put(six, 2L);
    long size = streamStorePartialMock.getSizeTillStreamCut(SCOPE, stream1, map1, Optional.empty(), null, executor).join();
    doReturn(CompletableFuture.completedFuture(new StreamCutRecord(1L, size, ImmutableMap.copyOf(map1)))).when(streamMetadataTasks).generateStreamCut(anyString(), anyString(), any(), any(), any());
    // call retention and verify that retention policy applies
    streamMetadataTasks.retention(SCOPE, stream1, retentionPolicy, 1L, null, "").join();
    // now the retention set has one streamcut: 6/2
    // subscriber lowerbound is 0/1, 2/1, 5/-1.. truncation should happen at the lowerbound
    VersionedMetadata<StreamTruncationRecord> truncationRecord = streamStorePartialMock.getTruncationRecord(SCOPE, stream1, null, executor).join();
    assertEquals(truncationRecord.getObject().getStreamCut().get(0L).longValue(), 1L);
    assertEquals(truncationRecord.getObject().getStreamCut().get(2L).longValue(), 1L);
    assertEquals(truncationRecord.getObject().getStreamCut().get(five).longValue(), -1L);
    assertTrue(truncationRecord.getObject().isUpdating());
    streamStorePartialMock.completeTruncation(SCOPE, stream1, truncationRecord, null, executor).join();
}
Also used : StreamTruncationRecord(io.pravega.controller.store.stream.records.StreamTruncationRecord) StreamCutImpl(io.pravega.client.stream.impl.StreamCutImpl) HashMap(java.util.HashMap) ArgumentMatchers.anyString(org.mockito.ArgumentMatchers.anyString) Segment(io.pravega.client.segment.impl.Segment) StreamConfiguration(io.pravega.client.stream.StreamConfiguration) Stream(io.pravega.client.stream.Stream) StreamConfigurationRecord(io.pravega.controller.store.stream.records.StreamConfigurationRecord) ControllerEventStreamWriterMock(io.pravega.controller.mocks.ControllerEventStreamWriterMock) EventStreamWriterMock(io.pravega.controller.mocks.EventStreamWriterMock) ReaderGroupConfig(io.pravega.client.stream.ReaderGroupConfig) ScalingPolicy(io.pravega.client.stream.ScalingPolicy) StreamCut(io.pravega.client.stream.StreamCut) Controller(io.pravega.controller.stream.api.grpc.v1.Controller) RetentionPolicy(io.pravega.client.stream.RetentionPolicy) AtomicLong(java.util.concurrent.atomic.AtomicLong) ArgumentMatchers.anyLong(org.mockito.ArgumentMatchers.anyLong) RGStreamCutRecord(io.pravega.shared.controller.event.RGStreamCutRecord) StreamCutRecord(io.pravega.controller.store.stream.records.StreamCutRecord) Test(org.junit.Test)
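
The segment ids in this test come from NameUtils.computeSegmentId. As a sketch of the packing it performs (the bit layout here is my assumption: creation epoch in the high 32 bits, segment number in the low 32 bits):

// Hypothetical reimplementation for illustration; the real helper is
// io.pravega.shared.NameUtils.computeSegmentId(int segmentNumber, int epoch).
static long computeSegmentIdSketch(int segmentNumber, int epoch) {
    return ((long) epoch << 32) | (segmentNumber & 0xFFFFFFFFL);
}
// e.g. computeSegmentIdSketch(3, 1) pairs segment number 3 with creation epoch 1,
// matching `long three = NameUtils.computeSegmentId(3, 1)` in the test above.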

Aggregations

StreamCutRecord (io.pravega.controller.store.stream.records.StreamCutRecord) 16
RetentionPolicy (io.pravega.client.stream.RetentionPolicy) 15
ScalingPolicy (io.pravega.client.stream.ScalingPolicy) 15
StreamConfiguration (io.pravega.client.stream.StreamConfiguration) 15
StreamTruncationRecord (io.pravega.controller.store.stream.records.StreamTruncationRecord) 14
Test (org.junit.Test) 14
ReaderGroupConfig (io.pravega.client.stream.ReaderGroupConfig) 13
Stream (io.pravega.client.stream.Stream) 13
HashMap (java.util.HashMap) 13
RGStreamCutRecord (io.pravega.shared.controller.event.RGStreamCutRecord) 12
AtomicLong (java.util.concurrent.atomic.AtomicLong) 12
Segment (io.pravega.client.segment.impl.Segment) 11
StreamCut (io.pravega.client.stream.StreamCut) 11
StreamCutImpl (io.pravega.client.stream.impl.StreamCutImpl) 11
ControllerEventStreamWriterMock (io.pravega.controller.mocks.ControllerEventStreamWriterMock) 10
EventStreamWriterMock (io.pravega.controller.mocks.EventStreamWriterMock) 10
StreamConfigurationRecord (io.pravega.controller.store.stream.records.StreamConfigurationRecord) 10
Controller (io.pravega.controller.stream.api.grpc.v1.Controller) 10
ArgumentMatchers.anyLong (org.mockito.ArgumentMatchers.anyLong) 10
ArgumentMatchers.anyString (org.mockito.ArgumentMatchers.anyString) 10