
Example 11 with TestingCluster

Use of org.apache.curator.test.TestingCluster in the project druid by druid-io.

The setUp method of the class RemoteTaskRunnerTestUtils.

void setUp() throws Exception {
    testingCluster = new TestingCluster(1);
    testingCluster.start();
    cf = CuratorFrameworkFactory.builder()
            .connectString(testingCluster.getConnectString())
            .retryPolicy(new ExponentialBackoffRetry(1, 10))
            .compressionProvider(new PotentiallyGzippedCompressionProvider(false))
            .build();
    cf.start();
    cf.blockUntilConnected();
    cf.create().creatingParentsIfNeeded().forPath(basePath);
    cf.create().creatingParentsIfNeeded().forPath(tasksPath);
}
Also used : TestingCluster(org.apache.curator.test.TestingCluster), ExponentialBackoffRetry(org.apache.curator.retry.ExponentialBackoffRetry), PotentiallyGzippedCompressionProvider(io.druid.curator.PotentiallyGzippedCompressionProvider)
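A setUp like the one above is usually paired with a tearDown that releases resources in the reverse order of creation: close the Curator client first, then the embedded cluster. The original tearDown is not shown here, so the following is only a minimal sketch; the class name and fields are assumptions for illustration.

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.test.TestingCluster;

// Hypothetical tearDown counterpart to the setUp above (not from the
// original source). Closing the client before the cluster avoids retry
// loops and watch callbacks firing against an already-stopped ensemble.
public class RemoteTaskRunnerTearDownSketch {
    private CuratorFramework cf;
    private TestingCluster testingCluster;

    void tearDown() throws Exception {
        cf.close();              // releases the ZooKeeper session and watches
        testingCluster.close();  // stops the embedded server and cleans temp dirs
    }
}
```

The ordering matters: `TestingCluster.close()` tears down the server sockets, so a still-running client would otherwise start its reconnection/retry cycle during shutdown.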

Example 12 with TestingCluster

Use of org.apache.curator.test.TestingCluster in the project druid by druid-io.

The setUp method of the class BatchDataSegmentAnnouncerTest.

@Before
public void setUp() throws Exception {
    testingCluster = new TestingCluster(1);
    testingCluster.start();
    cf = CuratorFrameworkFactory.builder()
            .connectString(testingCluster.getConnectString())
            .retryPolicy(new ExponentialBackoffRetry(1, 10))
            .compressionProvider(new PotentiallyGzippedCompressionProvider(false))
            .build();
    cf.start();
    cf.blockUntilConnected();
    cf.create().creatingParentsIfNeeded().forPath(testBasePath);
    jsonMapper = new DefaultObjectMapper();
    announcer = new Announcer(cf, MoreExecutors.sameThreadExecutor());
    announcer.start();
    segmentReader = new SegmentReader(cf, jsonMapper);
    skipDimensionsAndMetrics = false;
    skipLoadSpec = false;
    segmentAnnouncer = new BatchDataSegmentAnnouncer(new DruidServerMetadata("id", "host", Long.MAX_VALUE, "type", "tier", 0), new BatchDataSegmentAnnouncerConfig() {

        @Override
        public int getSegmentsPerNode() {
            return 50;
        }

        @Override
        public long getMaxBytesPerNode() {
            return maxBytesPerNode.get();
        }

        @Override
        public boolean isSkipDimensionsAndMetrics() {
            return skipDimensionsAndMetrics;
        }

        @Override
        public boolean isSkipLoadSpec() {
            return skipLoadSpec;
        }
    }, new ZkPathsConfig() {

        @Override
        public String getBase() {
            return testBasePath;
        }
    }, announcer, jsonMapper);
    segmentAnnouncer.start();
    testSegments = Sets.newHashSet();
    for (int i = 0; i < 100; i++) {
        testSegments.add(makeSegment(i));
    }
}
Also used : TestingCluster(org.apache.curator.test.TestingCluster), BatchDataSegmentAnnouncer(io.druid.server.coordination.BatchDataSegmentAnnouncer), Announcer(io.druid.curator.announcement.Announcer), ExponentialBackoffRetry(org.apache.curator.retry.ExponentialBackoffRetry), ZkPathsConfig(io.druid.server.initialization.ZkPathsConfig), BatchDataSegmentAnnouncerConfig(io.druid.server.initialization.BatchDataSegmentAnnouncerConfig), DefaultObjectMapper(io.druid.jackson.DefaultObjectMapper), DruidServerMetadata(io.druid.server.coordination.DruidServerMetadata), PotentiallyGzippedCompressionProvider(io.druid.curator.PotentiallyGzippedCompressionProvider), Before(org.junit.Before)

Example 13 with TestingCluster

Use of org.apache.curator.test.TestingCluster in the project hadoop by apache.

The testZKClusterDown method of the class TestLeaderElectorService.

// 1. rm1 active
// 2. restart zk cluster
// 3. rm1 will first relinquish leadership, then re-acquire it
@Test
public void testZKClusterDown() throws Exception {
    rm1 = startRM("rm1", HAServiceState.ACTIVE);
    // stop zk cluster
    zkCluster.stop();
    waitFor(rm1, HAServiceState.STANDBY);
    Collection<InstanceSpec> instanceSpecs = zkCluster.getInstances();
    zkCluster = new TestingCluster(instanceSpecs);
    zkCluster.start();
    // rm becomes active again
    waitFor(rm1, HAServiceState.ACTIVE);
}
Also used : InstanceSpec(org.apache.curator.test.InstanceSpec), TestingCluster(org.apache.curator.test.TestingCluster), Test(org.junit.Test)
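The test above restarts the whole ensemble by rebuilding a TestingCluster from the old InstanceSpecs. When only a partial failure is wanted, TestingCluster can also kill and restart a single member while the rest keep quorum. A minimal sketch of that variant (the class name and variable names are illustrative, not from any project above):

```java
import org.apache.curator.test.InstanceSpec;
import org.apache.curator.test.TestingCluster;

// Illustrative sketch: fail one member of a 3-node ensemble and bring it
// back. The remaining two servers keep quorum, so connected clients
// should stay connected throughout.
public class SingleServerFailureSketch {
    public static void main(String[] args) throws Exception {
        TestingCluster cluster = new TestingCluster(3);
        cluster.start();
        InstanceSpec victim = cluster.getInstances().iterator().next();
        cluster.killServer(victim);      // hard-kills only that server
        cluster.restartServer(victim);   // restarts it on the same ports
        cluster.close();
    }
}
```

This is the complement of the full-restart pattern in Example 13: killing a minority of servers exercises quorum survival, while stopping the whole cluster (as above) exercises leadership loss and re-acquisition.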

Example 14 with TestingCluster

Use of org.apache.curator.test.TestingCluster in the project zipkin by openzipkin.

The doEvaluate method of the class ZooKeeperRule.

void doEvaluate(Statement base) throws Throwable {
    try {
        cluster = new TestingCluster(3);
        cluster.start();
        client = newClient(cluster.getConnectString(), new RetryOneTime(200));
        client.start();
        checkState(client.blockUntilConnected(5, TimeUnit.SECONDS), "failed to connect to zookeeper in 5 seconds");
        base.evaluate();
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new IllegalStateException("Interrupted while connecting to ZooKeeper", e);
    } finally {
        client.close();
        cluster.close();
    }
}
Also used : TestingCluster(org.apache.curator.test.TestingCluster), RetryOneTime(org.apache.curator.retry.RetryOneTime)

Example 15 with TestingCluster

Use of org.apache.curator.test.TestingCluster in the project exhibitor by soabase.

The setup method of the class TestZookeeperConfigProvider.

@BeforeMethod
public void setup() throws Exception {
    timing = new Timing();
    cluster = new TestingCluster(3);
    cluster.start();
    client = CuratorFrameworkFactory.newClient(cluster.getConnectString(), timing.session(), timing.connection(), new RetryOneTime(1));
    client.start();
}
Also used : TestingCluster(org.apache.curator.test.TestingCluster), RetryOneTime(org.apache.curator.retry.RetryOneTime), Timing(org.apache.curator.test.Timing), BeforeMethod(org.testng.annotations.BeforeMethod)
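Example 15 passes `timing.session()` and `timing.connection()` into the client factory. Curator's Timing helper exists so that every timeout in a test suite derives from one base value, which can be scaled up uniformly on slow CI machines. A short sketch of how those values are typically consumed (the class name here is illustrative):

```java
import org.apache.curator.test.Timing;

// Illustrative sketch: Timing centralizes test timeouts so a whole suite
// can be slowed down uniformly instead of tuning magic numbers per test.
public class TimingSketch {
    public static void main(String[] args) throws Exception {
        Timing timing = new Timing();
        int sessionMs = timing.session();       // ZooKeeper session timeout, ms
        int connectionMs = timing.connection(); // client connection timeout, ms
        timing.sleepABit();                     // brief pause scaled to the timing base
        System.out.println(sessionMs + " " + connectionMs);
    }
}
```

Using the same Timing instance for both the client configuration and the test's own waits keeps them consistent, so a scaled-up session timeout cannot outlive the test's patience.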

Aggregations

TestingCluster (org.apache.curator.test.TestingCluster): 15
Before (org.junit.Before): 8
PotentiallyGzippedCompressionProvider (io.druid.curator.PotentiallyGzippedCompressionProvider): 6
ExponentialBackoffRetry (org.apache.curator.retry.ExponentialBackoffRetry): 6
ZkPathsConfig (io.druid.server.initialization.ZkPathsConfig): 4
DefaultObjectMapper (io.druid.jackson.DefaultObjectMapper): 3
Announcer (io.druid.curator.announcement.Announcer): 2
TestBroker (io.druid.indexing.kafka.test.TestBroker): 2
BatchDataSegmentAnnouncer (io.druid.server.coordination.BatchDataSegmentAnnouncer): 2
DruidServerMetadata (io.druid.server.coordination.DruidServerMetadata): 2
BatchDataSegmentAnnouncerConfig (io.druid.server.initialization.BatchDataSegmentAnnouncerConfig): 2
IndexerZkConfig (io.druid.server.initialization.IndexerZkConfig): 2
RetryOneTime (org.apache.curator.retry.RetryOneTime): 2
InstanceSpec (org.apache.curator.test.InstanceSpec): 2
Period (org.joda.time.Period): 2
NamedType (com.fasterxml.jackson.databind.jsontype.NamedType): 1
Predicate (com.google.common.base.Predicate): 1
LoggingEmitter (com.metamx.emitter.core.LoggingEmitter): 1
ServiceEmitter (com.metamx.emitter.service.ServiceEmitter): 1
BatchServerInventoryView (io.druid.client.BatchServerInventoryView): 1