
Example 41 with DataSegment

Use of io.druid.timeline.DataSegment in project druid by druid-io.

From class DataSegmentTest, method testIdentifierWithZeroPartition.

@Test
public void testIdentifierWithZeroPartition() {
    final DataSegment segment = DataSegment.builder()
        .dataSource("foo")
        .interval(new Interval("2012-01-01/2012-01-02"))
        .version(new DateTime("2012-01-01T11:22:33.444Z").toString())
        .shardSpec(new SingleDimensionShardSpec("bar", null, "abc", 0))
        .build();
    Assert.assertEquals(
        "foo_2012-01-01T00:00:00.000Z_2012-01-02T00:00:00.000Z_2012-01-01T11:22:33.444Z",
        segment.getIdentifier()
    );
}
Also used : DataSegment(io.druid.timeline.DataSegment) SingleDimensionShardSpec(io.druid.timeline.partition.SingleDimensionShardSpec) DateTime(org.joda.time.DateTime) Interval(org.joda.time.Interval) Test(org.junit.Test)
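
For contrast, a sketch of the identifier when the partition number is nonzero; the identifier is expected to gain a trailing "_<partitionNum>" suffix in that case. The method name and the shard-spec arguments below are illustrative, not taken from the source.

@Test
public void testIdentifierWithNonzeroPartition() {
    // Hypothetical counterpart to the test above: partition 7 is expected
    // to append "_7" to the identifier.
    final DataSegment segment = DataSegment.builder()
        .dataSource("foo")
        .interval(new Interval("2012-01-01/2012-01-02"))
        .version(new DateTime("2012-01-01T11:22:33.444Z").toString())
        .shardSpec(new SingleDimensionShardSpec("bar", "abc", "def", 7))
        .build();
    Assert.assertEquals(
        "foo_2012-01-01T00:00:00.000Z_2012-01-02T00:00:00.000Z_2012-01-01T11:22:33.444Z_7",
        segment.getIdentifier()
    );
}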

Example 42 with DataSegment

Use of io.druid.timeline.DataSegment in project druid by druid-io.

From class DataSegmentTest, method testV1SerializationNullMetrics.

@Test
public void testV1SerializationNullMetrics() throws Exception {
    final DataSegment segment = DataSegment.builder()
        .dataSource("foo")
        .interval(new Interval("2012-01-01/2012-01-02"))
        .version(new DateTime("2012-01-01T11:22:33.444Z").toString())
        .build();
    final DataSegment segment2 = mapper.readValue(mapper.writeValueAsString(segment), DataSegment.class);
    Assert.assertEquals("empty dimensions", ImmutableList.of(), segment2.getDimensions());
    Assert.assertEquals("empty metrics", ImmutableList.of(), segment2.getMetrics());
}
Also used : DataSegment(io.druid.timeline.DataSegment) DateTime(org.joda.time.DateTime) Interval(org.joda.time.Interval) Test(org.junit.Test)
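
A companion sketch to the null-metrics round trip, assuming the same mapper field and builder API as the test above: dimensions and metrics that are set explicitly should come back intact from the same JSON round trip. The method name and field values are illustrative only.

@Test
public void testV1SerializationWithDimensionsAndMetrics() throws Exception {
    // Hypothetical companion test: explicitly set lists are assumed to
    // survive the serialization round trip unchanged.
    final DataSegment segment = DataSegment.builder()
        .dataSource("foo")
        .interval(new Interval("2012-01-01/2012-01-02"))
        .version(new DateTime("2012-01-01T11:22:33.444Z").toString())
        .dimensions(ImmutableList.of("dim1", "dim2"))
        .metrics(ImmutableList.of("met1"))
        .build();
    final DataSegment segment2 = mapper.readValue(mapper.writeValueAsString(segment), DataSegment.class);
    Assert.assertEquals(ImmutableList.of("dim1", "dim2"), segment2.getDimensions());
    Assert.assertEquals(ImmutableList.of("met1"), segment2.getMetrics());
}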

Example 43 with DataSegment

Use of io.druid.timeline.DataSegment in project druid by druid-io.

From class DataSegmentTest, method testIdentifier.

@Test
public void testIdentifier() {
    final DataSegment segment = DataSegment.builder()
        .dataSource("foo")
        .interval(new Interval("2012-01-01/2012-01-02"))
        .version(new DateTime("2012-01-01T11:22:33.444Z").toString())
        .shardSpec(NoneShardSpec.instance())
        .build();
    Assert.assertEquals(
        "foo_2012-01-01T00:00:00.000Z_2012-01-02T00:00:00.000Z_2012-01-01T11:22:33.444Z",
        segment.getIdentifier()
    );
}
Also used : DataSegment(io.druid.timeline.DataSegment) DateTime(org.joda.time.DateTime) Interval(org.joda.time.Interval) Test(org.junit.Test)
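
Examples 41 and 43 assert the same identifier string, so a NoneShardSpec and a partition-zero SingleDimensionShardSpec evidently both omit the partition suffix. A quick sketch (not from the source) making that equivalence explicit:

// Sketch only: neither NoneShardSpec nor partition 0 adds an "_<n>" suffix,
// so the two identifiers should match.
final DataSegment noneSpec = DataSegment.builder()
    .dataSource("foo")
    .interval(new Interval("2012-01-01/2012-01-02"))
    .version(new DateTime("2012-01-01T11:22:33.444Z").toString())
    .shardSpec(NoneShardSpec.instance())
    .build();
final DataSegment zeroPartition = DataSegment.builder()
    .dataSource("foo")
    .interval(new Interval("2012-01-01/2012-01-02"))
    .version(new DateTime("2012-01-01T11:22:33.444Z").toString())
    .shardSpec(new SingleDimensionShardSpec("bar", null, "abc", 0))
    .build();
Assert.assertEquals(noneSpec.getIdentifier(), zeroPartition.getIdentifier());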

Example 44 with DataSegment

Use of io.druid.timeline.DataSegment in project druid by druid-io.

From class DirectDruidClientTest, method testCancel.

@Test
public void testCancel() throws Exception {
    HttpClient httpClient = EasyMock.createStrictMock(HttpClient.class);
    Capture<Request> capturedRequest = EasyMock.newCapture();
    ListenableFuture<Object> cancelledFuture = Futures.immediateCancelledFuture();
    SettableFuture<Object> cancellationFuture = SettableFuture.create();
    // The first go() call is the query itself and returns an already-cancelled
    // future; that prompts DirectDruidClient to send a cancellation request,
    // which the second go() call answers. capturedRequest therefore ends up
    // holding the second (DELETE) request.
    EasyMock.expect(httpClient.go(EasyMock.capture(capturedRequest), EasyMock.<HttpResponseHandler>anyObject()))
        .andReturn(cancelledFuture)
        .once();
    EasyMock.expect(httpClient.go(EasyMock.capture(capturedRequest), EasyMock.<HttpResponseHandler>anyObject()))
        .andReturn(cancellationFuture)
        .once();
    EasyMock.replay(httpClient);
    final ServerSelector serverSelector = new ServerSelector(
        new DataSegment(
            "test",
            new Interval("2013-01-01/2013-01-02"),
            new DateTime("2013-01-01").toString(),
            Maps.<String, Object>newHashMap(),
            Lists.<String>newArrayList(),
            Lists.<String>newArrayList(),
            NoneShardSpec.instance(),
            0,
            0L
        ),
        new HighestPriorityTierSelectorStrategy(new ConnectionCountServerSelectorStrategy())
    );
    DirectDruidClient client1 = new DirectDruidClient(
        new ReflectionQueryToolChestWarehouse(),
        QueryRunnerTestHelper.NOOP_QUERYWATCHER,
        new DefaultObjectMapper(),
        httpClient,
        "foo",
        new NoopServiceEmitter()
    );
    QueryableDruidServer queryableDruidServer1 = new QueryableDruidServer(
        new DruidServer("test1", "localhost", 0, "historical", DruidServer.DEFAULT_TIER, 0),
        client1
    );
    serverSelector.addServerAndUpdateSegment(queryableDruidServer1, serverSelector.getSegment());
    TimeBoundaryQuery query = Druids.newTimeBoundaryQueryBuilder().dataSource("test").build();
    HashMap<String, List> context = Maps.newHashMap();
    cancellationFuture.set(new StatusResponseHolder(HttpResponseStatus.OK, new StringBuilder("cancelled")));
    Sequence results = client1.run(query, context);
    Assert.assertEquals(HttpMethod.DELETE, capturedRequest.getValue().getMethod());
    Assert.assertEquals(0, client1.getNumOpenConnections());
    QueryInterruptedException exception = null;
    try {
        Sequences.toList(results, Lists.newArrayList());
    } catch (QueryInterruptedException e) {
        exception = e;
    }
    Assert.assertNotNull(exception);
    EasyMock.verify(httpClient);
}
Also used : TimeBoundaryQuery(io.druid.query.timeboundary.TimeBoundaryQuery) DataSegment(io.druid.timeline.DataSegment) DateTime(org.joda.time.DateTime) QueryableDruidServer(io.druid.client.selector.QueryableDruidServer) ServerSelector(io.druid.client.selector.ServerSelector) HighestPriorityTierSelectorStrategy(io.druid.client.selector.HighestPriorityTierSelectorStrategy) StatusResponseHolder(com.metamx.http.client.response.StatusResponseHolder) List(java.util.List) QueryInterruptedException(io.druid.query.QueryInterruptedException) ConnectionCountServerSelectorStrategy(io.druid.client.selector.ConnectionCountServerSelectorStrategy) Request(com.metamx.http.client.Request) QueryableDruidServer(io.druid.client.selector.QueryableDruidServer) NoopServiceEmitter(io.druid.server.metrics.NoopServiceEmitter) Sequence(io.druid.java.util.common.guava.Sequence) HttpClient(com.metamx.http.client.HttpClient) DefaultObjectMapper(io.druid.jackson.DefaultObjectMapper) HttpResponseHandler(com.metamx.http.client.response.HttpResponseHandler) ReflectionQueryToolChestWarehouse(io.druid.query.ReflectionQueryToolChestWarehouse) Interval(org.joda.time.Interval) Test(org.junit.Test)
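
For contrast with the cancelled run, a sketch of how the same Sequence would be drained when the query completes normally. The result types are assumed from the time-boundary query API and are not part of the test:

// Hypothetical happy path: draining the Sequence yields a time-boundary
// Result instead of throwing QueryInterruptedException.
List<Result<TimeBoundaryResultValue>> rows = Sequences.toList(
    results,
    Lists.<Result<TimeBoundaryResultValue>>newArrayList()
);
DateTime maxTime = rows.get(0).getValue().getMaxTime();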

Example 45 with DataSegment

Use of io.druid.timeline.DataSegment in project druid by druid-io.

From class BatchServerInventoryViewTest, method setUp.

@Before
public void setUp() throws Exception {
    testingCluster = new TestingCluster(1);
    testingCluster.start();
    cf = CuratorFrameworkFactory.builder()
        .connectString(testingCluster.getConnectString())
        .retryPolicy(new ExponentialBackoffRetry(1, 10))
        .compressionProvider(new PotentiallyGzippedCompressionProvider(true))
        .build();
    cf.start();
    cf.blockUntilConnected();
    cf.create().creatingParentsIfNeeded().forPath(testBasePath);
    jsonMapper = new DefaultObjectMapper();
    announcer = new Announcer(cf, MoreExecutors.sameThreadExecutor());
    announcer.start();
    segmentAnnouncer = new BatchDataSegmentAnnouncer(
        new DruidServerMetadata("id", "host", Long.MAX_VALUE, "type", "tier", 0),
        new BatchDataSegmentAnnouncerConfig() {

        @Override
        public int getSegmentsPerNode() {
            return 50;
        }
    }, new ZkPathsConfig() {

        @Override
        public String getBase() {
            return testBasePath;
        }
    }, announcer, jsonMapper);
    segmentAnnouncer.start();
    testSegments = Sets.newConcurrentHashSet();
    for (int i = 0; i < INITIAL_SEGMENTS; i++) {
        testSegments.add(makeSegment(i));
    }
    batchServerInventoryView = new BatchServerInventoryView(new ZkPathsConfig() {

        @Override
        public String getBase() {
            return testBasePath;
        }
    }, cf, jsonMapper, Predicates.<Pair<DruidServerMetadata, DataSegment>>alwaysTrue());
    batchServerInventoryView.start();
    inventoryUpdateCounter.set(0);
    filteredBatchServerInventoryView = new BatchServerInventoryView(new ZkPathsConfig() {

        @Override
        public String getBase() {
            return testBasePath;
        }
    }, cf, jsonMapper, new Predicate<Pair<DruidServerMetadata, DataSegment>>() {

        @Override
        public boolean apply(@Nullable Pair<DruidServerMetadata, DataSegment> input) {
            return input.rhs.getInterval().getStart().isBefore(SEGMENT_INTERVAL_START.plusDays(INITIAL_SEGMENTS));
        }
    }) {

        @Override
        protected DruidServer addInnerInventory(DruidServer container, String inventoryKey, Set<DataSegment> inventory) {
            DruidServer server = super.addInnerInventory(container, inventoryKey, inventory);
            inventoryUpdateCounter.incrementAndGet();
            return server;
        }
    };
    filteredBatchServerInventoryView.start();
}
Also used : BatchServerInventoryView(io.druid.client.BatchServerInventoryView) ExponentialBackoffRetry(org.apache.curator.retry.ExponentialBackoffRetry) BatchDataSegmentAnnouncerConfig(io.druid.server.initialization.BatchDataSegmentAnnouncerConfig) DruidServer(io.druid.client.DruidServer) DruidServerMetadata(io.druid.server.coordination.DruidServerMetadata) PotentiallyGzippedCompressionProvider(io.druid.curator.PotentiallyGzippedCompressionProvider) DataSegment(io.druid.timeline.DataSegment) Predicate(com.google.common.base.Predicate) TestingCluster(org.apache.curator.test.TestingCluster) BatchDataSegmentAnnouncer(io.druid.server.coordination.BatchDataSegmentAnnouncer) Announcer(io.druid.curator.announcement.Announcer) ZkPathsConfig(io.druid.server.initialization.ZkPathsConfig) DefaultObjectMapper(io.druid.jackson.DefaultObjectMapper) BatchDataSegmentAnnouncer(io.druid.server.coordination.BatchDataSegmentAnnouncer) Nullable(javax.annotation.Nullable) Pair(io.druid.java.util.common.Pair) Before(org.junit.Before)
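
The setUp above calls a makeSegment(i) helper that the snippet does not show. A plausible reconstruction (hypothetical; the real helper may differ), shaped so that the filtering predicate's comparison against SEGMENT_INTERVAL_START makes sense:

// Hypothetical reconstruction of the helper used in setUp: each segment is
// assumed to cover one day, starting offset days after SEGMENT_INTERVAL_START.
private DataSegment makeSegment(int offset) {
    return DataSegment.builder()
        .dataSource("foo")
        .interval(new Interval(
            SEGMENT_INTERVAL_START.plusDays(offset),
            SEGMENT_INTERVAL_START.plusDays(offset + 1)
        ))
        .version(new DateTime().toString())
        .build();
}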

Aggregations

DataSegment (io.druid.timeline.DataSegment): 296
Test (org.junit.Test): 154
Interval (org.joda.time.Interval): 136
File (java.io.File): 56
DateTime (org.joda.time.DateTime): 53
IOException (java.io.IOException): 37
DruidServer (io.druid.client.DruidServer): 36
Map (java.util.Map): 35
ImmutableMap (com.google.common.collect.ImmutableMap): 20
DruidDataSource (io.druid.client.DruidDataSource): 19
ListeningExecutorService (com.google.common.util.concurrent.ListeningExecutorService): 18
List (java.util.List): 18
DefaultObjectMapper (io.druid.jackson.DefaultObjectMapper): 16
Rule (io.druid.server.coordinator.rules.Rule): 16
ForeverLoadRule (io.druid.server.coordinator.rules.ForeverLoadRule): 14
IntervalDropRule (io.druid.server.coordinator.rules.IntervalDropRule): 13
IntervalLoadRule (io.druid.server.coordinator.rules.IntervalLoadRule): 13
GET (javax.ws.rs.GET): 13
Produces (javax.ws.rs.Produces): 13
Path (org.apache.hadoop.fs.Path): 13