
Example 1 with PotentiallyGzippedCompressionProvider

Use of io.druid.curator.PotentiallyGzippedCompressionProvider in project druid by druid-io.

From the class RemoteTaskRunnerFactoryTest, method setUp.

@Before
public void setUp() throws Exception {
    TestUtils testUtils = new TestUtils();
    jsonMapper = testUtils.getTestObjectMapper();
    testingCluster = new TestingCluster(1);
    testingCluster.start();
    cf = CuratorFrameworkFactory.builder()
            .connectString(testingCluster.getConnectString())
            .retryPolicy(new ExponentialBackoffRetry(1, 10))
            .compressionProvider(new PotentiallyGzippedCompressionProvider(false))
            .build();
    cf.start();
    cf.blockUntilConnected();
}
Also used: TestUtils(io.druid.indexing.common.TestUtils) TestingCluster(org.apache.curator.test.TestingCluster) ExponentialBackoffRetry(org.apache.curator.retry.ExponentialBackoffRetry) PotentiallyGzippedCompressionProvider(io.druid.curator.PotentiallyGzippedCompressionProvider) Before(org.junit.Before)
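
The search hit above only captures setUp; the matching tearDown is not part of the result. A minimal cleanup sketch for this fixture (assuming the cf and testingCluster fields created above, plus org.junit.After) might look like:

@After
public void tearDown() throws Exception {
    // Close the Curator client before stopping the in-process ZooKeeper cluster.
    cf.close();
    testingCluster.stop();
}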

Example 2 with PotentiallyGzippedCompressionProvider

Use of io.druid.curator.PotentiallyGzippedCompressionProvider in project druid by druid-io.

From the class WorkerTaskMonitorTest, method setUp.

@Before
public void setUp() throws Exception {
    testingCluster = new TestingCluster(1);
    testingCluster.start();
    cf = CuratorFrameworkFactory.builder()
            .connectString(testingCluster.getConnectString())
            .retryPolicy(new ExponentialBackoffRetry(1, 10))
            .compressionProvider(new PotentiallyGzippedCompressionProvider(false))
            .build();
    cf.start();
    cf.blockUntilConnected();
    cf.create().creatingParentsIfNeeded().forPath(basePath);
    worker = new Worker("worker", "localhost", 3, "0");
    workerCuratorCoordinator = new WorkerCuratorCoordinator(jsonMapper, new IndexerZkConfig(new ZkPathsConfig() {

        @Override
        public String getBase() {
            return basePath;
        }
    }, null, null, null, null, null), new TestRemoteTaskRunnerConfig(new Period("PT1S")), cf, worker);
    workerCuratorCoordinator.start();
    // Start a task monitor
    workerTaskMonitor = createTaskMonitor();
    TestTasks.registerSubtypes(jsonMapper);
    jsonMapper.registerSubtypes(new NamedType(TestRealtimeTask.class, "test_realtime"));
    workerTaskMonitor.start();
    task = TestTasks.immediateSuccess("test");
}
Also used: IndexerZkConfig(io.druid.server.initialization.IndexerZkConfig) TestRemoteTaskRunnerConfig(io.druid.indexing.overlord.TestRemoteTaskRunnerConfig) ExponentialBackoffRetry(org.apache.curator.retry.ExponentialBackoffRetry) NamedType(com.fasterxml.jackson.databind.jsontype.NamedType) Period(org.joda.time.Period) PotentiallyGzippedCompressionProvider(io.druid.curator.PotentiallyGzippedCompressionProvider) TestRealtimeTask(io.druid.indexing.common.TestRealtimeTask) TestingCluster(org.apache.curator.test.TestingCluster) ZkPathsConfig(io.druid.server.initialization.ZkPathsConfig) Before(org.junit.Before)
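
The NamedType registration in this setUp is what lets the worker's ObjectMapper round-trip task payloads polymorphically: each task is serialized with a "type" discriminator and deserialized back through the base task interface. A hedged sketch of that round trip, assuming the io.druid.indexing.common.task.Task interface and the jsonMapper configured above:

// TestTasks.immediateSuccess is the same helper used at the end of this setUp.
Task original = TestTasks.immediateSuccess("test");
String json = jsonMapper.writeValueAsString(original);
// The registered subtype name lets Jackson resolve the concrete class again.
Task roundTripped = jsonMapper.readValue(json, Task.class);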

Example 3 with PotentiallyGzippedCompressionProvider

Use of io.druid.curator.PotentiallyGzippedCompressionProvider in project druid by druid-io.

From the class WorkerResourceTest, method setUp.

@Before
public void setUp() throws Exception {
    testingCluster = new TestingCluster(1);
    testingCluster.start();
    cf = CuratorFrameworkFactory.builder()
            .connectString(testingCluster.getConnectString())
            .retryPolicy(new ExponentialBackoffRetry(1, 10))
            .compressionProvider(new PotentiallyGzippedCompressionProvider(false))
            .build();
    cf.start();
    cf.blockUntilConnected();
    cf.create().creatingParentsIfNeeded().forPath(basePath);
    worker = new Worker("host", "ip", 3, "v1");
    curatorCoordinator = new WorkerCuratorCoordinator(jsonMapper, new IndexerZkConfig(new ZkPathsConfig() {

        @Override
        public String getBase() {
            return basePath;
        }
    }, null, null, null, null, null), new RemoteTaskRunnerConfig(), cf, worker);
    curatorCoordinator.start();
    workerResource = new WorkerResource(worker, curatorCoordinator, null);
}
Also used: IndexerZkConfig(io.druid.server.initialization.IndexerZkConfig) TestingCluster(org.apache.curator.test.TestingCluster) ExponentialBackoffRetry(org.apache.curator.retry.ExponentialBackoffRetry) WorkerCuratorCoordinator(io.druid.indexing.worker.WorkerCuratorCoordinator) ZkPathsConfig(io.druid.server.initialization.ZkPathsConfig) Worker(io.druid.indexing.worker.Worker) PotentiallyGzippedCompressionProvider(io.druid.curator.PotentiallyGzippedCompressionProvider) RemoteTaskRunnerConfig(io.druid.indexing.overlord.config.RemoteTaskRunnerConfig) Before(org.junit.Before)

Example 4 with PotentiallyGzippedCompressionProvider

Use of io.druid.curator.PotentiallyGzippedCompressionProvider in project druid by druid-io.

From the class BatchServerInventoryViewTest, method setUp.

@Before
public void setUp() throws Exception {
    testingCluster = new TestingCluster(1);
    testingCluster.start();
    cf = CuratorFrameworkFactory.builder()
            .connectString(testingCluster.getConnectString())
            .retryPolicy(new ExponentialBackoffRetry(1, 10))
            .compressionProvider(new PotentiallyGzippedCompressionProvider(true))
            .build();
    cf.start();
    cf.blockUntilConnected();
    cf.create().creatingParentsIfNeeded().forPath(testBasePath);
    jsonMapper = new DefaultObjectMapper();
    announcer = new Announcer(cf, MoreExecutors.sameThreadExecutor());
    announcer.start();
    segmentAnnouncer = new BatchDataSegmentAnnouncer(new DruidServerMetadata("id", "host", Long.MAX_VALUE, "type", "tier", 0), new BatchDataSegmentAnnouncerConfig() {

        @Override
        public int getSegmentsPerNode() {
            return 50;
        }
    }, new ZkPathsConfig() {

        @Override
        public String getBase() {
            return testBasePath;
        }
    }, announcer, jsonMapper);
    segmentAnnouncer.start();
    testSegments = Sets.newConcurrentHashSet();
    for (int i = 0; i < INITIAL_SEGMENTS; i++) {
        testSegments.add(makeSegment(i));
    }
    batchServerInventoryView = new BatchServerInventoryView(new ZkPathsConfig() {

        @Override
        public String getBase() {
            return testBasePath;
        }
    }, cf, jsonMapper, Predicates.<Pair<DruidServerMetadata, DataSegment>>alwaysTrue());
    batchServerInventoryView.start();
    inventoryUpdateCounter.set(0);
    filteredBatchServerInventoryView = new BatchServerInventoryView(new ZkPathsConfig() {

        @Override
        public String getBase() {
            return testBasePath;
        }
    }, cf, jsonMapper, new Predicate<Pair<DruidServerMetadata, DataSegment>>() {

        @Override
        public boolean apply(@Nullable Pair<DruidServerMetadata, DataSegment> input) {
            return input.rhs.getInterval().getStart().isBefore(SEGMENT_INTERVAL_START.plusDays(INITIAL_SEGMENTS));
        }
    }) {

        @Override
        protected DruidServer addInnerInventory(DruidServer container, String inventoryKey, Set<DataSegment> inventory) {
            DruidServer server = super.addInnerInventory(container, inventoryKey, inventory);
            inventoryUpdateCounter.incrementAndGet();
            return server;
        }
    };
    filteredBatchServerInventoryView.start();
}
Also used: BatchServerInventoryView(io.druid.client.BatchServerInventoryView) ExponentialBackoffRetry(org.apache.curator.retry.ExponentialBackoffRetry) BatchDataSegmentAnnouncerConfig(io.druid.server.initialization.BatchDataSegmentAnnouncerConfig) DruidServer(io.druid.client.DruidServer) DruidServerMetadata(io.druid.server.coordination.DruidServerMetadata) PotentiallyGzippedCompressionProvider(io.druid.curator.PotentiallyGzippedCompressionProvider) DataSegment(io.druid.timeline.DataSegment) Predicate(com.google.common.base.Predicate) TestingCluster(org.apache.curator.test.TestingCluster) BatchDataSegmentAnnouncer(io.druid.server.coordination.BatchDataSegmentAnnouncer) Announcer(io.druid.curator.announcement.Announcer) ZkPathsConfig(io.druid.server.initialization.ZkPathsConfig) DefaultObjectMapper(io.druid.jackson.DefaultObjectMapper) Nullable(javax.annotation.Nullable) Pair(io.druid.java.util.common.Pair) Before(org.junit.Before)
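
This is the only example that constructs the provider with true, so payloads announced through Curator are gzipped; the task-runner tests above pass false and write plain bytes. The standalone sketch below illustrates how the class is commonly understood to behave (decompress tolerates both gzipped and plain data); it is not code taken from the Druid tests:

import io.druid.curator.PotentiallyGzippedCompressionProvider;
import java.nio.charset.StandardCharsets;

public class CompressionProviderSketch {
    public static void main(String[] args) throws Exception {
        byte[] data = "{\"segment\":\"example\"}".getBytes(StandardCharsets.UTF_8);

        // compress = true: bytes handed to Curator are gzipped before writing.
        PotentiallyGzippedCompressionProvider gzipping = new PotentiallyGzippedCompressionProvider(true);
        byte[] stored = gzipping.compress("/druid/example", data);

        // compress = false: writes stay uncompressed, but decompress still tries
        // gzip first and falls back to the raw bytes, hence "potentially" gzipped.
        PotentiallyGzippedCompressionProvider plain = new PotentiallyGzippedCompressionProvider(false);
        byte[] roundTripped = plain.decompress("/druid/example", stored);

        System.out.println(new String(roundTripped, StandardCharsets.UTF_8));
    }
}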

Example 5 with PotentiallyGzippedCompressionProvider

Use of io.druid.curator.PotentiallyGzippedCompressionProvider in project druid by druid-io.

From the class RemoteTaskRunnerTestUtils, method setUp.

void setUp() throws Exception {
    testingCluster = new TestingCluster(1);
    testingCluster.start();
    cf = CuratorFrameworkFactory.builder()
            .connectString(testingCluster.getConnectString())
            .retryPolicy(new ExponentialBackoffRetry(1, 10))
            .compressionProvider(new PotentiallyGzippedCompressionProvider(false))
            .build();
    cf.start();
    cf.blockUntilConnected();
    cf.create().creatingParentsIfNeeded().forPath(basePath);
    cf.create().creatingParentsIfNeeded().forPath(tasksPath);
}
Also used: TestingCluster(org.apache.curator.test.TestingCluster) ExponentialBackoffRetry(org.apache.curator.retry.ExponentialBackoffRetry) PotentiallyGzippedCompressionProvider(io.druid.curator.PotentiallyGzippedCompressionProvider)
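
All five fixtures repeat the same TestingCluster plus CuratorFramework wiring. As a sketch only (the helper class and method below are hypothetical, not part of Druid), the shared construction could be factored out once and reused by each setUp:

import io.druid.curator.PotentiallyGzippedCompressionProvider;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.curator.test.TestingCluster;

public class CuratorTestHelper {

    // Builds and starts a CuratorFramework against the given in-process ZooKeeper
    // cluster, mirroring the setup repeated in the examples above.
    static CuratorFramework newCurator(TestingCluster cluster, boolean compress) throws InterruptedException {
        CuratorFramework cf = CuratorFrameworkFactory.builder()
                .connectString(cluster.getConnectString())
                .retryPolicy(new ExponentialBackoffRetry(1, 10))
                .compressionProvider(new PotentiallyGzippedCompressionProvider(compress))
                .build();
        cf.start();
        cf.blockUntilConnected();
        return cf;
    }
}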

Aggregations

PotentiallyGzippedCompressionProvider (io.druid.curator.PotentiallyGzippedCompressionProvider): 7 uses
ExponentialBackoffRetry (org.apache.curator.retry.ExponentialBackoffRetry): 6 uses
TestingCluster (org.apache.curator.test.TestingCluster): 6 uses
Before (org.junit.Before): 5 uses
ZkPathsConfig (io.druid.server.initialization.ZkPathsConfig): 4 uses
Announcer (io.druid.curator.announcement.Announcer): 2 uses
DefaultObjectMapper (io.druid.jackson.DefaultObjectMapper): 2 uses
BatchDataSegmentAnnouncer (io.druid.server.coordination.BatchDataSegmentAnnouncer): 2 uses
DruidServerMetadata (io.druid.server.coordination.DruidServerMetadata): 2 uses
BatchDataSegmentAnnouncerConfig (io.druid.server.initialization.BatchDataSegmentAnnouncerConfig): 2 uses
IndexerZkConfig (io.druid.server.initialization.IndexerZkConfig): 2 uses
NamedType (com.fasterxml.jackson.databind.jsontype.NamedType): 1 use
Predicate (com.google.common.base.Predicate): 1 use
BatchServerInventoryView (io.druid.client.BatchServerInventoryView): 1 use
DruidServer (io.druid.client.DruidServer): 1 use
TestRealtimeTask (io.druid.indexing.common.TestRealtimeTask): 1 use
TestUtils (io.druid.indexing.common.TestUtils): 1 use
TestRemoteTaskRunnerConfig (io.druid.indexing.overlord.TestRemoteTaskRunnerConfig): 1 use
RemoteTaskRunnerConfig (io.druid.indexing.overlord.config.RemoteTaskRunnerConfig): 1 use
Worker (io.druid.indexing.worker.Worker): 1 use