
Example 26 with StreamImpl

use of io.pravega.client.stream.impl.StreamImpl in project pravega by pravega.

the class MetadataScalabilityTest method truncation.

void truncation(ControllerImpl controller, List<List<Segment>> listOfEpochs) {
    int numSegments = getStreamConfig().getScalingPolicy().getMinNumSegments();
    int scalesToPerform = getScalesToPerform();
    Stream stream = new StreamImpl(SCOPE, getStreamName());
    // Try SCALES_TO_PERFORM randomly generated stream cuts and truncate the stream at
    // those stream cuts.
    List<AtomicInteger> indexes = new LinkedList<>();
    Random rand = new Random();
    for (int i = 0; i < numSegments; i++) {
        indexes.add(new AtomicInteger(1));
    }
    Futures.loop(() -> indexes.stream().allMatch(x -> x.get() < scalesToPerform - 1), () -> {
        // We randomly generate a stream cut in each iteration of this loop. A valid stream
        // cut in this scenario contains, for each position i in [0, numSegments - 1], a segment
        // from one of the scale epochs of the stream. For each position i, we randomly
        // choose an epoch and pick the segment at position i, advancing that position's epoch
        // index (in the indexes list) so that the next iteration chooses a later epoch for
        // the same i.
        //
        // Because the segment at position i always covers the range [d * i, d * (i + 1)],
        // where d = 1 / (number of segments), the stream cut is guaranteed to cover
        // the entire key space.
        Map<Segment, Long> map = new HashMap<>();
        for (int i = 0; i < numSegments; i++) {
            AtomicInteger index = indexes.get(i);
            index.set(index.get() + rand.nextInt(scalesToPerform - index.get()));
            map.put(listOfEpochs.get(index.get()).get(i), 0L);
        }
        StreamCut cut = new StreamCutImpl(stream, map);
        log.info("truncating stream at {}", map);
        return controller.truncateStream(SCOPE, streamName, cut).thenCompose(truncated -> {
            log.info("stream truncated successfully at {}", cut);
            assertTrue(truncated);
            // We just validate that a non-empty set of successors is returned.
            return controller.getSuccessors(cut).thenAccept(successors -> {
                assertTrue(successors.getSegments().size() > 0);
                log.info("Successors for streamcut {} are {}", cut, successors);
            });
        });
    }, executorService).join();
}
Also used : Segment(io.pravega.client.segment.impl.Segment) StreamCut(io.pravega.client.stream.StreamCut) StreamImpl(io.pravega.client.stream.impl.StreamImpl) RunWith(org.junit.runner.RunWith) HashMap(java.util.HashMap) Random(java.util.Random) CompletableFuture(java.util.concurrent.CompletableFuture) StreamConfiguration(io.pravega.client.stream.StreamConfiguration) ArrayList(java.util.ArrayList) Lists(com.google.common.collect.Lists) Pair(org.apache.commons.lang3.tuple.Pair) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) Stream(io.pravega.client.stream.Stream) StreamCutImpl(io.pravega.client.stream.impl.StreamCutImpl) Map(java.util.Map) After(org.junit.After) Timeout(org.junit.rules.Timeout) URI(java.net.URI) LinkedList(java.util.LinkedList) Before(org.junit.Before) Environment(io.pravega.test.system.framework.Environment) NameUtils(io.pravega.shared.NameUtils) Assert.assertTrue(org.junit.Assert.assertTrue) Collectors(java.util.stream.Collectors) ExecutionException(java.util.concurrent.ExecutionException) List(java.util.List) Slf4j(lombok.extern.slf4j.Slf4j) Rule(org.junit.Rule) ControllerImpl(io.pravega.client.control.impl.ControllerImpl) ExecutorServiceHelpers(io.pravega.common.concurrent.ExecutorServiceHelpers) Comparator(java.util.Comparator) Controller(io.pravega.client.control.impl.Controller) Futures(io.pravega.common.concurrent.Futures) SystemTestRunner(io.pravega.test.system.framework.SystemTestRunner)
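The example above drives truncation through randomly chosen epochs. As a minimal sketch of the same API surface in isolation: build a StreamCut from a StreamImpl plus a segment-to-offset map and hand it to Controller.truncateStream. The scope, stream name and segment numbers below are placeholders, and the Controller instance is assumed to be already connected; offsets of 0L (the start of each segment) are always valid cut positions, as in the test.

import io.pravega.client.control.impl.Controller;
import io.pravega.client.segment.impl.Segment;
import io.pravega.client.stream.Stream;
import io.pravega.client.stream.StreamCut;
import io.pravega.client.stream.impl.StreamCutImpl;
import io.pravega.client.stream.impl.StreamImpl;
import java.util.HashMap;
import java.util.Map;

public class TruncateAtStreamCutSketch {
    // Sketch only: "scope" / "stream" and the segment numbers are placeholder values.
    static void truncateAtSegmentStarts(Controller controller) {
        Stream stream = new StreamImpl("scope", "stream");
        Map<Segment, Long> positions = new HashMap<>();
        // One entry per segment that the cut crosses; together they must cover the whole key space.
        positions.put(new Segment("scope", "stream", 0L), 0L);
        positions.put(new Segment("scope", "stream", 1L), 0L);
        StreamCut cut = new StreamCutImpl(stream, positions);
        // truncateStream returns a CompletableFuture<Boolean>; join() is used here for brevity.
        boolean truncated = controller.truncateStream("scope", "stream", cut).join();
        assert truncated;
    }
}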

Example 27 with StreamImpl

use of io.pravega.client.stream.impl.StreamImpl in project pravega by pravega.

the class EndToEndChannelLeakTest method testDetectChannelLeakSegmentSealedPooled.

@Test(timeout = 30000)
public void testDetectChannelLeakSegmentSealedPooled() throws Exception {
    StreamConfiguration config = StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(1)).build();
    Controller controller = controllerWrapper.getController();
    controllerWrapper.getControllerService().createScope(SCOPE, 0L).get();
    controller.createStream(SCOPE, STREAM_NAME, config).get();
    // Set the max number of connections to verify channel creation behaviour
    final ClientConfig clientConfig = ClientConfig.builder().maxConnectionsPerSegmentStore(5).build();
    @Cleanup SocketConnectionFactoryImpl connectionFactory = new SocketConnectionFactoryImpl(clientConfig, new InlineExecutor());
    @Cleanup ConnectionPoolImpl connectionPool = new ConnectionPoolImpl(clientConfig, connectionFactory);
    @Cleanup ClientFactoryImpl clientFactory = new ClientFactoryImpl(SCOPE, controller, connectionPool);
    // Create a writer.
    @Cleanup EventStreamWriter<String> writer = clientFactory.createEventWriter(SCOPE, serializer, writerConfig);
    // Write an event.
    writer.writeEvent("0", "zero").get();
    assertChannelCount(1, connectionPool, connectionFactory);
    @Cleanup ReaderGroupManager groupManager = new ReaderGroupManagerImpl(SCOPE, controller, clientFactory);
    groupManager.createReaderGroup(READER_GROUP, ReaderGroupConfig.builder().disableAutomaticCheckpoints().groupRefreshTimeMillis(0).stream(Stream.of(SCOPE, STREAM_NAME)).build());
    @Cleanup EventStreamReader<String> reader1 = clientFactory.createReader("readerId1", READER_GROUP, serializer, ReaderConfig.builder().disableTimeWindows(true).build());
    // Read an event.
    EventRead<String> event = reader1.readNextEvent(10000);
    assertEquals("zero", event.getEvent());
    // scale
    Stream stream = new StreamImpl(SCOPE, SCOPE);
    Map<Double, Double> map = new HashMap<>();
    map.put(0.0, 0.33);
    map.put(0.33, 0.66);
    map.put(0.66, 1.0);
    Boolean result = controller.scaleStream(stream, Collections.singletonList(0L), map, executor).getFuture().get();
    assertTrue(result);
    event = reader1.readNextEvent(0);
    assertNull(event.getEvent());
    @Cleanup ReaderGroup readerGroup = groupManager.getReaderGroup(READER_GROUP);
    readerGroup.initiateCheckpoint("cp", executor);
    event = reader1.readNextEvent(5000);
    assertEquals("cp", event.getCheckpointName());
    // Write more events.
    writer.writeEvent("0", "one").get();
    writer.writeEvent("0", "two").get();
    writer.writeEvent("1", "three").get();
    event = reader1.readNextEvent(10000);
    assertNotNull(event.getEvent());
    assertChannelCount(5, connectionPool, connectionFactory);
    event = reader1.readNextEvent(10000);
    assertNotNull(event.getEvent());
    assertChannelCount(5, connectionPool, connectionFactory);
    event = reader1.readNextEvent(10000);
    assertNotNull(event.getEvent());
    assertChannelCount(5, connectionPool, connectionFactory);
}
Also used : ReaderGroupManager(io.pravega.client.admin.ReaderGroupManager) HashMap(java.util.HashMap) ReaderGroup(io.pravega.client.stream.ReaderGroup) ConnectionPoolImpl(io.pravega.client.connection.impl.ConnectionPoolImpl) Controller(io.pravega.client.control.impl.Controller) SocketConnectionFactoryImpl(io.pravega.client.connection.impl.SocketConnectionFactoryImpl) Cleanup(lombok.Cleanup) ClientFactoryImpl(io.pravega.client.stream.impl.ClientFactoryImpl) InlineExecutor(io.pravega.test.common.InlineExecutor) StreamImpl(io.pravega.client.stream.impl.StreamImpl) StreamConfiguration(io.pravega.client.stream.StreamConfiguration) Stream(io.pravega.client.stream.Stream) ClientConfig(io.pravega.client.ClientConfig) ReaderGroupManagerImpl(io.pravega.client.admin.impl.ReaderGroupManagerImpl) Test(org.junit.Test)
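The scale step above hard-codes a three-way split of the key space (0.0 to 0.33, 0.33 to 0.66, 0.66 to 1.0). Below is a small sketch of how such a map can be generated for any target segment count; evenSplit is a hypothetical helper, not part of the Pravega API, and the new ranges must together cover exactly the key space of the segments being sealed.

import java.util.HashMap;
import java.util.Map;

public final class KeyRangeSplits {
    // Hypothetical helper: split the whole key space [0.0, 1.0) into `count` equal ranges.
    // For count = 3 this produces the same shape as the hard-coded map in the test above.
    static Map<Double, Double> evenSplit(int count) {
        Map<Double, Double> ranges = new HashMap<>();
        for (int i = 0; i < count; i++) {
            double low = (double) i / count;
            double high = (i == count - 1) ? 1.0 : (double) (i + 1) / count;
            ranges.put(low, high);
        }
        return ranges;
    }
}

With it, the scale call in the test reads controller.scaleStream(stream, Collections.singletonList(0L), evenSplit(3), executor).getFuture().get(): the list names the segments to seal and the map describes the ranges of their replacements.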

Example 28 with StreamImpl

use of io.pravega.client.stream.impl.StreamImpl in project pravega by pravega.

the class EndToEndChannelLeakTest method testDetectChannelLeakSegmentSealed.

@Test(timeout = 30000)
public void testDetectChannelLeakSegmentSealed() throws Exception {
    StreamConfiguration config = StreamConfiguration.builder().scalingPolicy(ScalingPolicy.fixed(1)).build();
    Controller controller = controllerWrapper.getController();
    controllerWrapper.getControllerService().createScope(SCOPE, 0L).get();
    controller.createStream(SCOPE, STREAM_NAME, config).get();
    // Set the max number of connections to verify channel creation behaviour
    final ClientConfig clientConfig = ClientConfig.builder().maxConnectionsPerSegmentStore(500).build();
    @Cleanup SocketConnectionFactoryImpl connectionFactory = new SocketConnectionFactoryImpl(clientConfig, executor);
    @Cleanup ConnectionPoolImpl connectionPool = new ConnectionPoolImpl(clientConfig, connectionFactory);
    @Cleanup ClientFactoryImpl clientFactory = new ClientFactoryImpl(SCOPE, controller, connectionPool);
    int channelCount = 0;
    assertChannelCount(channelCount, connectionPool, connectionFactory);
    @Cleanup ReaderGroupManager groupManager = new ReaderGroupManagerImpl(SCOPE, controller, clientFactory);
    groupManager.createReaderGroup(READER_GROUP, ReaderGroupConfig.builder().disableAutomaticCheckpoints().groupRefreshTimeMillis(0).stream(Stream.of(SCOPE, STREAM_NAME)).build());
    // Should not add any connections
    assertChannelCount(channelCount, connectionPool, connectionFactory);
    // Create a writer.
    @Cleanup EventStreamWriter<String> writer = clientFactory.createEventWriter(SCOPE, serializer, writerConfig);
    // Write an event.
    writer.writeEvent("0", "zero").get();
    channelCount += 1;
    assertChannelCount(channelCount, connectionPool, connectionFactory);
    @Cleanup EventStreamReader<String> reader1 = clientFactory.createReader("readerId1", READER_GROUP, serializer, ReaderConfig.builder().disableTimeWindows(true).build());
    // Creating a reader spawns a revisioned stream client which opens 4 sockets (read, write, metadataClient and conditionalUpdates).
    channelCount += 4;
    assertChannelCount(channelCount, connectionPool, connectionFactory);
    // Read an event.
    EventRead<String> event = reader1.readNextEvent(10000);
    assertEquals("zero", event.getEvent());
    channelCount += 1;
    assertChannelCount(channelCount, connectionPool, connectionFactory);
    // scale
    Stream stream = new StreamImpl(SCOPE, SCOPE);
    Map<Double, Double> map = new HashMap<>();
    map.put(0.0, 0.33);
    map.put(0.33, 0.66);
    map.put(0.66, 1.0);
    Boolean result = controller.scaleStream(stream, Collections.singletonList(0L), map, executor).getFuture().get();
    assertTrue(result);
    event = reader1.readNextEvent(0);
    assertNull(event.getEvent());
    // Reader should see EOS
    channelCount -= 1;
    assertChannelCount(channelCount, connectionPool, connectionFactory);
    // should detect end of segment
    writer.writeEvent("1", "one").get();
    // Close one segment, open 3.
    channelCount += 2;
    assertChannelCount(channelCount, connectionPool, connectionFactory);
    ReaderGroup readerGroup = groupManager.getReaderGroup(READER_GROUP);
    readerGroup.getMetrics().unreadBytes();
    CompletableFuture<Checkpoint> future = readerGroup.initiateCheckpoint("cp1", executor);
    // 4 more from the state synchronizer
    channelCount += 4;
    assertChannelCount(channelCount, connectionPool, connectionFactory);
    event = reader1.readNextEvent(5000);
    assertEquals("cp1", event.getCheckpointName());
    event = reader1.readNextEvent(10000);
    assertEquals("one", event.getEvent());
    // From new segments on reader
    channelCount += 3;
    assertChannelCount(channelCount, connectionPool, connectionFactory);
    future.join();
    // Checkpoint should close connections back down
    readerGroup.close();
    channelCount -= 4;
    assertChannelCount(channelCount, connectionPool, connectionFactory);
    // Write more events.
    writer.writeEvent("2", "two").get();
    writer.writeEvent("3", "three").get();
    writer.writeEvent("4", "four").get();
    // no changes to socket count.
    assertChannelCount(channelCount, connectionPool, connectionFactory);
    event = reader1.readNextEvent(10000);
    assertNotNull(event.getEvent());
    // no changes to socket count.
    assertChannelCount(channelCount, connectionPool, connectionFactory);
    reader1.close();
    // 3 from segments, 4 from group state.
    channelCount -= 7;
    assertChannelCount(channelCount, connectionPool, connectionFactory);
    groupManager.close();
    writer.close();
    assertChannelCount(0, connectionPool, connectionFactory);
}
Also used : ReaderGroupManager(io.pravega.client.admin.ReaderGroupManager) HashMap(java.util.HashMap) ReaderGroup(io.pravega.client.stream.ReaderGroup) ConnectionPoolImpl(io.pravega.client.connection.impl.ConnectionPoolImpl) Controller(io.pravega.client.control.impl.Controller) SocketConnectionFactoryImpl(io.pravega.client.connection.impl.SocketConnectionFactoryImpl) Cleanup(lombok.Cleanup) Checkpoint(io.pravega.client.stream.Checkpoint) ClientFactoryImpl(io.pravega.client.stream.impl.ClientFactoryImpl) StreamImpl(io.pravega.client.stream.impl.StreamImpl) StreamConfiguration(io.pravega.client.stream.StreamConfiguration) Stream(io.pravega.client.stream.Stream) ClientConfig(io.pravega.client.ClientConfig) ReaderGroupManagerImpl(io.pravega.client.admin.impl.ReaderGroupManagerImpl) Test(org.junit.Test)
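assertChannelCount(...) is a private helper of EndToEndChannelLeakTest and its body is not part of this excerpt. Conceptually it compares the expected count against the channels the pool and factory currently hold, retrying briefly because sockets are closed asynchronously. The sketch below is hypothetical: the accessors getActiveChannelCount() on ConnectionPoolImpl and SocketConnectionFactoryImpl are assumed names, not verified API.

// Hypothetical sketch of the channel-count assertion used throughout these tests.
// getActiveChannelCount() on the pool and the factory is an assumed accessor name.
private void assertChannelCount(int expected, ConnectionPoolImpl pool,
                                SocketConnectionFactoryImpl factory) throws Exception {
    long deadline = System.currentTimeMillis() + 10_000;
    while (System.currentTimeMillis() < deadline
            && (pool.getActiveChannelCount() != expected || factory.getActiveChannelCount() != expected)) {
        Thread.sleep(50); // connections are torn down asynchronously, so poll briefly
    }
    assertEquals(expected, pool.getActiveChannelCount());
    assertEquals(expected, factory.getActiveChannelCount());
}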

Example 29 with StreamImpl

use of io.pravega.client.stream.impl.StreamImpl in project pravega by pravega.

the class EndToEndChannelLeakTest method testDetectChannelLeakMultiReaderPooled.

@Test(timeout = 30000)
public void testDetectChannelLeakMultiReaderPooled() throws Exception {
    StreamConfiguration config = StreamConfiguration.builder().scalingPolicy(ScalingPolicy.byEventRate(10, 2, 1)).build();
    // Set the max number of connections to verify channel creation behaviour
    final ClientConfig clientConfig = ClientConfig.builder().maxConnectionsPerSegmentStore(5).build();
    Controller controller = controllerWrapper.getController();
    controllerWrapper.getControllerService().createScope(SCOPE, 0L).get();
    controller.createStream(SCOPE, STREAM_NAME, config).get();
    @Cleanup SocketConnectionFactoryImpl connectionFactory = new SocketConnectionFactoryImpl(clientConfig, executor);
    @Cleanup ConnectionPoolImpl connectionPool = new ConnectionPoolImpl(clientConfig, connectionFactory);
    @Cleanup ClientFactoryImpl clientFactory = new ClientFactoryImpl(SCOPE, controller, connectionPool);
    // open socket count.
    int expectedChannelCount = 0;
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    // Create a writer and write an event.
    @Cleanup EventStreamWriter<String> writer = clientFactory.createEventWriter(STREAM_NAME, serializer, writerConfig);
    writer.writeEvent("0", "zero").get();
    // connection to segment 0.
    expectedChannelCount += 1;
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    @Cleanup ReaderGroupManager groupManager = new ReaderGroupManagerImpl(SCOPE, controller, clientFactory);
    // no changes expected.
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    groupManager.createReaderGroup(READER_GROUP, ReaderGroupConfig.builder().disableAutomaticCheckpoints().groupRefreshTimeMillis(0).stream(Stream.of(SCOPE, STREAM_NAME)).build());
    // create a reader and read an event.
    @Cleanup EventStreamReader<String> reader1 = clientFactory.createReader("readerId1", READER_GROUP, serializer, ReaderConfig.builder().disableTimeWindows(true).build());
    // Creating a reader spawns a revisioned stream client which opens 4 sockets ( read, write, metadataClient and conditionalUpdates).
    EventRead<String> event = reader1.readNextEvent(10000);
    // reader creates a new connection to the segment 0;
    assertEquals("zero", event.getEvent());
    // Connection to segment 0 does not cause an increase in number of open connections since we have reached the maxConnection count.
    assertChannelCount(5, connectionPool, connectionFactory);
    // scale
    Stream stream = new StreamImpl(SCOPE, STREAM_NAME);
    Map<Double, Double> map = new HashMap<>();
    map.put(0.0, 0.33);
    map.put(0.33, 0.66);
    map.put(0.66, 1.0);
    Boolean result = controller.scaleStream(stream, Collections.singletonList(0L), map, executor).getFuture().get();
    assertTrue(result);
    // No changes to the channel count.
    assertChannelCount(5, connectionPool, connectionFactory);
    // Reaches EOS
    event = reader1.readNextEvent(1000);
    assertNull(event.getEvent());
    // Write more events.
    writer.writeEvent("1", "one").get();
    writer.writeEvent("2", "two").get();
    writer.writeEvent("3", "three").get();
    writer.writeEvent("4", "four").get();
    writer.writeEvent("5", "five").get();
    writer.writeEvent("6", "six").get();
    // 2 new flows are opened (+3 connections to segments 1, 2 and 3 after the scale by the writer,
    // -1 flow to segment 0, which is sealed).
    assertChannelCount(5, connectionPool, connectionFactory);
    ReaderGroup readerGroup = groupManager.getReaderGroup(READER_GROUP);
    CompletableFuture<Checkpoint> future = readerGroup.initiateCheckpoint("cp1", executor);
    // 4 more from the state synchronizer
    assertChannelCount(5, connectionPool, connectionFactory);
    event = reader1.readNextEvent(5000);
    assertEquals("cp1", event.getCheckpointName());
    event = reader1.readNextEvent(5000);
    assertNotNull(event.getEvent());
    future.join();
    // Checkpoint should close connections back down
    readerGroup.close();
    assertChannelCount(5, connectionPool, connectionFactory);
    event = reader1.readNextEvent(10000);
    assertNotNull(event.getEvent());
    assertChannelCount(5, connectionPool, connectionFactory);
}
Also used : ReaderGroupManager(io.pravega.client.admin.ReaderGroupManager) HashMap(java.util.HashMap) ReaderGroup(io.pravega.client.stream.ReaderGroup) ConnectionPoolImpl(io.pravega.client.connection.impl.ConnectionPoolImpl) Controller(io.pravega.client.control.impl.Controller) SocketConnectionFactoryImpl(io.pravega.client.connection.impl.SocketConnectionFactoryImpl) Cleanup(lombok.Cleanup) Checkpoint(io.pravega.client.stream.Checkpoint) ClientFactoryImpl(io.pravega.client.stream.impl.ClientFactoryImpl) StreamImpl(io.pravega.client.stream.impl.StreamImpl) StreamConfiguration(io.pravega.client.stream.StreamConfiguration) Stream(io.pravega.client.stream.Stream) ClientConfig(io.pravega.client.ClientConfig) ReaderGroupManagerImpl(io.pravega.client.admin.impl.ReaderGroupManagerImpl) Test(org.junit.Test)
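The pooled variants of these tests never exceed a channel count of 5 because ClientConfig.maxConnectionsPerSegmentStore caps how many sockets ConnectionPoolImpl opens per segment store; further segment traffic is multiplexed as additional flows over the existing sockets, which is why the comments above count flows separately from channels. A minimal sketch of that wiring, reusing only the constructors already shown in these examples:

import io.pravega.client.ClientConfig;
import io.pravega.client.connection.impl.ConnectionPoolImpl;
import io.pravega.client.connection.impl.SocketConnectionFactoryImpl;
import io.pravega.test.common.InlineExecutor;

public class PooledConnectionSetupSketch {
    static ConnectionPoolImpl pooledConnections() {
        // Cap the pool at 5 sockets per segment store; segment connections beyond that
        // become flows over existing sockets rather than new channels.
        ClientConfig clientConfig = ClientConfig.builder()
                .maxConnectionsPerSegmentStore(5)
                .build();
        SocketConnectionFactoryImpl factory = new SocketConnectionFactoryImpl(clientConfig, new InlineExecutor());
        return new ConnectionPoolImpl(clientConfig, factory);
    }
}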

Example 30 with StreamImpl

use of io.pravega.client.stream.impl.StreamImpl in project pravega by pravega.

the class EndToEndChannelLeakTest method testDetectChannelLeakMultiReader.

@Test(timeout = 30000)
public void testDetectChannelLeakMultiReader() throws Exception {
    StreamConfiguration config = StreamConfiguration.builder().scalingPolicy(ScalingPolicy.byEventRate(10, 2, 1)).build();
    // Set the max number of connections to verify channel creation behaviour
    final ClientConfig clientConfig = ClientConfig.builder().maxConnectionsPerSegmentStore(500).build();
    Controller controller = controllerWrapper.getController();
    controllerWrapper.getControllerService().createScope(SCOPE, 0L).get();
    controller.createStream(SCOPE, STREAM_NAME, config).get();
    @Cleanup SocketConnectionFactoryImpl connectionFactory = new SocketConnectionFactoryImpl(clientConfig, new InlineExecutor());
    @Cleanup ConnectionPoolImpl connectionPool = new ConnectionPoolImpl(clientConfig, connectionFactory);
    @Cleanup ClientFactoryImpl clientFactory = new ClientFactoryImpl(SCOPE, controller, connectionPool);
    // open socket count.
    int expectedChannelCount = 0;
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    // Create a writer and write an event.
    @Cleanup EventStreamWriter<String> writer = clientFactory.createEventWriter(STREAM_NAME, serializer, writerConfig);
    writer.writeEvent("0", "zero").get();
    // connection to segment 0.
    expectedChannelCount += 1;
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    @Cleanup ReaderGroupManager groupManager = new ReaderGroupManagerImpl(SCOPE, controller, clientFactory);
    // no changes expected.
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    groupManager.createReaderGroup(READER_GROUP, ReaderGroupConfig.builder().disableAutomaticCheckpoints().groupRefreshTimeMillis(0).stream(Stream.of(SCOPE, STREAM_NAME)).build());
    // create a reader and read an event.
    @Cleanup EventStreamReader<String> reader1 = clientFactory.createReader("readerId1", READER_GROUP, serializer, ReaderConfig.builder().disableTimeWindows(true).build());
    // Creating a reader spawns a revisioned stream client which opens 4 sockets ( read, write, metadataClient and conditionalUpdates).
    expectedChannelCount += 4;
    EventRead<String> event = reader1.readNextEvent(10000);
    // reader creates a new connection to the segment 0;
    expectedChannelCount += 1;
    assertEquals("zero", event.getEvent());
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    // scale
    Stream stream = new StreamImpl(SCOPE, STREAM_NAME);
    Map<Double, Double> map = new HashMap<>();
    map.put(0.0, 0.33);
    map.put(0.33, 0.66);
    map.put(0.66, 1.0);
    Boolean result = controller.scaleStream(stream, Collections.singletonList(0L), map, executor).getFuture().get();
    assertTrue(result);
    // No changes to the channel count.
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    event = reader1.readNextEvent(0);
    assertNull(event.getEvent());
    event = reader1.readNextEvent(0);
    assertNull(event.getEvent());
    // should decrease channel count from close connection
    expectedChannelCount -= 1;
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    // Write more events.
    writer.writeEvent("1", "one").get();
    writer.writeEvent("2", "two").get();
    writer.writeEvent("3", "three").get();
    writer.writeEvent("4", "four").get();
    writer.writeEvent("5", "five").get();
    writer.writeEvent("6", "six").get();
    // Open 3 new segments close one old one.
    expectedChannelCount += 2;
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    ReaderGroup readerGroup = groupManager.getReaderGroup(READER_GROUP);
    CompletableFuture<Checkpoint> future = readerGroup.initiateCheckpoint("cp1", executor);
    // 4 more from the state synchronizer
    expectedChannelCount += 4;
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    event = reader1.readNextEvent(5000);
    assertEquals("cp1", event.getCheckpointName());
    // Add a new reader
    @Cleanup EventStreamReader<String> reader2 = clientFactory.createReader("readerId2", READER_GROUP, serializer, ReaderConfig.builder().disableTimeWindows(true).build());
    // Creating a reader spawns a revisioned stream client which opens 4 sockets ( read, write, metadataClient and conditionalUpdates).
    expectedChannelCount += 4;
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    event = reader1.readNextEvent(5000);
    assertNotNull(event.getEvent());
    event = reader2.readNextEvent(5000);
    assertNotNull(event.getEvent());
    // 3 more from the new segments
    expectedChannelCount += 3;
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    future.join();
    // Checkpoint should close connections back down
    readerGroup.close();
    expectedChannelCount -= 4;
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
    reader1.close();
    reader2.close();
    expectedChannelCount -= 8 + 3;
    assertChannelCount(expectedChannelCount, connectionPool, connectionFactory);
}
Also used : ReaderGroupManager(io.pravega.client.admin.ReaderGroupManager) HashMap(java.util.HashMap) ReaderGroup(io.pravega.client.stream.ReaderGroup) ConnectionPoolImpl(io.pravega.client.connection.impl.ConnectionPoolImpl) Controller(io.pravega.client.control.impl.Controller) SocketConnectionFactoryImpl(io.pravega.client.connection.impl.SocketConnectionFactoryImpl) Cleanup(lombok.Cleanup) Checkpoint(io.pravega.client.stream.Checkpoint) ClientFactoryImpl(io.pravega.client.stream.impl.ClientFactoryImpl) InlineExecutor(io.pravega.test.common.InlineExecutor) StreamImpl(io.pravega.client.stream.impl.StreamImpl) StreamConfiguration(io.pravega.client.stream.StreamConfiguration) Stream(io.pravega.client.stream.Stream) ClientConfig(io.pravega.client.ClientConfig) ReaderGroupManagerImpl(io.pravega.client.admin.impl.ReaderGroupManagerImpl) Test(org.junit.Test)
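One detail that is easy to miss in the longer listings is the checkpoint handshake: after initiateCheckpoint, the reader's next readNextEvent delivers the checkpoint marker rather than a data event, normal events resume on the following call, and the returned future completes only once every reader in the group has acknowledged the checkpoint. A compressed sketch, assuming the reader group, reader and executor already exist:

import io.pravega.client.stream.Checkpoint;
import io.pravega.client.stream.EventRead;
import io.pravega.client.stream.EventStreamReader;
import io.pravega.client.stream.ReaderGroup;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ScheduledExecutorService;

public class CheckpointHandshakeSketch {
    static Checkpoint checkpoint(ReaderGroup readerGroup, EventStreamReader<String> reader,
                                 ScheduledExecutorService executor) throws Exception {
        CompletableFuture<Checkpoint> future = readerGroup.initiateCheckpoint("cp1", executor);
        // The next read returns the checkpoint marker, not a data event.
        EventRead<String> event = reader.readNextEvent(5000);
        assert event.isCheckpoint() && "cp1".equals(event.getCheckpointName());
        // Subsequent reads deliver normal events again; the future completes once all
        // readers in the group have read past the checkpoint.
        return future.join();
    }
}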

Aggregations

StreamImpl (io.pravega.client.stream.impl.StreamImpl): 74
Test (org.junit.Test): 50
Stream (io.pravega.client.stream.Stream): 47
Cleanup (lombok.Cleanup): 36
StreamConfiguration (io.pravega.client.stream.StreamConfiguration): 32
HashMap (java.util.HashMap): 32
ClientFactoryImpl (io.pravega.client.stream.impl.ClientFactoryImpl): 22
Map (java.util.Map): 22
ReaderGroupManager (io.pravega.client.admin.ReaderGroupManager): 21
SocketConnectionFactoryImpl (io.pravega.client.connection.impl.SocketConnectionFactoryImpl): 21
Controller (io.pravega.client.control.impl.Controller): 21
ClientConfig (io.pravega.client.ClientConfig): 20
ReaderGroupManagerImpl (io.pravega.client.admin.impl.ReaderGroupManagerImpl): 18
Segment (io.pravega.client.segment.impl.Segment): 18
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 18
ConnectionFactory (io.pravega.client.connection.impl.ConnectionFactory): 16
Slf4j (lombok.extern.slf4j.Slf4j): 14
ScalingPolicy (io.pravega.client.stream.ScalingPolicy): 13
CompletableFuture (java.util.concurrent.CompletableFuture): 12
Before (org.junit.Before): 12