
Example 41 with PartitionName

Use of io.crate.metadata.PartitionName in project crate by crate.

From the class WhereClauseAnalyzerTest, method testMultipleColumnsOptimization.

@Test
public void testMultipleColumnsOptimization() throws Exception {
    WhereClause whereClause = analyzeSelectWhere("select * from generated_col where ts > '2015-01-01T12:00:00' and y = 1");
    assertThat(whereClause.partitions().size(), is(1));
    assertThat(whereClause.partitions().get(0), is(new PartitionName("generated_col", Arrays.asList(new BytesRef("1420070400000"), new BytesRef("-1"))).asIndexName()));
}
Also used: PartitionName (io.crate.metadata.PartitionName), WhereClause (io.crate.analyze.WhereClause), BytesRef (org.apache.lucene.util.BytesRef), Test (org.junit.Test), CrateUnitTest (io.crate.test.integration.CrateUnitTest)
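For orientation, the expected value in the assertion above is nothing more than the PartitionName built from the resolved partition column values. A minimal standalone sketch of that construction, using only the constructor and asIndexName() call that appear in the test (the class and main method are illustrative, not part of crate):

import io.crate.metadata.PartitionName;
import org.apache.lucene.util.BytesRef;
import java.util.Arrays;

public class PartitionNameSketch {

    public static void main(String[] args) {
        // The two BytesRef values are the resolved values of the table's partition columns;
        // "1420070400000" is 2015-01-01T00:00:00Z in epoch milliseconds, i.e. the single
        // partition the analyzed WHERE clause is narrowed down to.
        String indexName = new PartitionName("generated_col",
            Arrays.asList(new BytesRef("1420070400000"), new BytesRef("-1"))).asIndexName();
        System.out.println(indexName);
    }
}

The exact shape of the returned index name is an internal encoding, which is why the tests compare against asIndexName() rather than a literal string.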

Example 42 with PartitionName

Use of io.crate.metadata.PartitionName in project crate by crate.

From the class WhereClauseAnalyzerTest, method testEqualGenColOptimization.

@Test
public void testEqualGenColOptimization() throws Exception {
    WhereClause whereClause = analyzeSelectWhere("select * from generated_col where y = 1");
    assertThat(whereClause.partitions().size(), is(1));
    assertThat(whereClause.partitions().get(0), is(new PartitionName("generated_col", Arrays.asList(new BytesRef("1420070400000"), new BytesRef("-1"))).asIndexName()));
}
Also used: PartitionName (io.crate.metadata.PartitionName), WhereClause (io.crate.analyze.WhereClause), BytesRef (org.apache.lucene.util.BytesRef), Test (org.junit.Test), CrateUnitTest (io.crate.test.integration.CrateUnitTest)

Example 43 with PartitionName

Use of io.crate.metadata.PartitionName in project crate by crate.

From the class WhereClauseAnalyzerTest, method testGenColRangeOptimization.

@Test
public void testGenColRangeOptimization() throws Exception {
    WhereClause whereClause = analyzeSelectWhere("select * from generated_col where ts >= '2015-01-01T12:00:00' and ts <= '2015-01-02T00:00:00'");
    assertThat(whereClause.partitions().size(), is(2));
    assertThat(whereClause.partitions().get(0), is(new PartitionName("generated_col", Arrays.asList(new BytesRef("1420070400000"), new BytesRef("-1"))).asIndexName()));
    assertThat(whereClause.partitions().get(1), is(new PartitionName("generated_col", Arrays.asList(new BytesRef("1420156800000"), new BytesRef("-2"))).asIndexName()));
}
Also used: PartitionName (io.crate.metadata.PartitionName), WhereClause (io.crate.analyze.WhereClause), BytesRef (org.apache.lucene.util.BytesRef), Test (org.junit.Test), CrateUnitTest (io.crate.test.integration.CrateUnitTest)
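Since the range above spans two day partitions, the test asserts both expected index names by position. A hedged alternative formulation of the same check, assuming it lives in the same test class (analyzeSelectWhere is the existing helper; partedIndex is a hypothetical local helper added for readability):

// Hypothetical helper: builds the expected index name for a generated_col partition
// from its partition column values.
private static String partedIndex(String ts, String y) {
    return new PartitionName("generated_col",
        Arrays.asList(new BytesRef(ts), new BytesRef(y))).asIndexName();
}

@Test
public void testGenColRangeOptimizationContains() throws Exception {
    WhereClause whereClause = analyzeSelectWhere(
        "select * from generated_col where ts >= '2015-01-01T12:00:00' and ts <= '2015-01-02T00:00:00'");
    // 1420070400000 = 2015-01-01T00:00:00Z, 1420156800000 = 2015-01-02T00:00:00Z
    assertThat(whereClause.partitions(),
        contains(partedIndex("1420070400000", "-1"), partedIndex("1420156800000", "-2")));
}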

Example 44 with PartitionName

Use of io.crate.metadata.PartitionName in project crate by crate.

From the class CopyAnalyzerTest, method testCopyToDirectoryWithPartitionClause.

@Test
public void testCopyToDirectoryWithPartitionClause() throws Exception {
    CopyToAnalyzedStatement analysis = e.analyze("copy parted partition (date=1395874800000) to directory '/tmp'");
    String parted = new PartitionName("parted", Collections.singletonList(new BytesRef("1395874800000"))).asIndexName();
    QuerySpec querySpec = analysis.subQueryRelation().querySpec();
    assertThat(querySpec.where().partitions(), contains(parted));
    assertThat(analysis.overwrites().size(), is(0));
}
Also used: PartitionName (io.crate.metadata.PartitionName), BytesRef (org.apache.lucene.util.BytesRef), Test (org.junit.Test), CrateUnitTest (io.crate.test.integration.CrateUnitTest)

Example 45 with PartitionName

Use of io.crate.metadata.PartitionName in project crate by crate.

From the class PartitionedTableConcurrentIntegrationTest, method testSelectWhileShardsAreRelocating.

/**
 * Test depends on 2 data nodes
 */
@Test
public void testSelectWhileShardsAreRelocating() throws Throwable {
    execute("create table t (name string, p string) " + "clustered into 2 shards " + "partitioned by (p) with (number_of_replicas = 0)");
    ensureYellow();
    execute("insert into t (name, p) values (?, ?)", new Object[][] { new Object[] { "Marvin", "a" }, new Object[] { "Trillian", "a" } });
    execute("refresh table t");
    execute("set global stats.enabled=true");
    final AtomicReference<Throwable> lastThrowable = new AtomicReference<>();
    final CountDownLatch selects = new CountDownLatch(100);
    Thread t = new Thread(new Runnable() {

        @Override
        public void run() {
            while (selects.getCount() > 0) {
                try {
                    execute("select * from t");
                } catch (Throwable t) {
                    // The failed job should have three started operations
                    SQLResponse res = execute("select id from sys.jobs_log where error is not null order by started desc limit 1");
                    if (res.rowCount() > 0) {
                        String id = (String) res.rows()[0][0];
                        res = execute("select count(*) from sys.operations_log where (name = ? or name = ?) and job_id = ?", new Object[] { "collect", "fetchContext", id });
                        if ((long) res.rows()[0][0] < 3) {
                            // set the error if there were fewer than three logged operations
                            lastThrowable.set(t);
                        }
                    }
                } finally {
                    selects.countDown();
                }
            }
        }
    });
    t.start();
    PartitionName partitionName = new PartitionName("t", Collections.singletonList(new BytesRef("a")));
    final String indexName = partitionName.asIndexName();
    ClusterService clusterService = internalCluster().getInstance(ClusterService.class);
    DiscoveryNodes nodes = clusterService.state().nodes();
    List<String> nodeIds = new ArrayList<>(2);
    for (DiscoveryNode node : nodes) {
        if (node.dataNode()) {
            nodeIds.add(node.getId());
        }
    }
    final Map<String, String> nodeSwap = new HashMap<>(2);
    nodeSwap.put(nodeIds.get(0), nodeIds.get(1));
    nodeSwap.put(nodeIds.get(1), nodeIds.get(0));
    final CountDownLatch relocations = new CountDownLatch(20);
    Thread relocatingThread = new Thread(new Runnable() {

        @Override
        public void run() {
            while (relocations.getCount() > 0) {
                ClusterStateResponse clusterStateResponse = admin().cluster().prepareState().setIndices(indexName).execute().actionGet();
                List<ShardRouting> shardRoutings = clusterStateResponse.getState().routingTable().allShards(indexName);
                ClusterRerouteRequestBuilder clusterRerouteRequestBuilder = admin().cluster().prepareReroute();
                int numMoves = 0;
                for (ShardRouting shardRouting : shardRoutings) {
                    if (shardRouting.currentNodeId() == null) {
                        continue;
                    }
                    if (shardRouting.state() != ShardRoutingState.STARTED) {
                        continue;
                    }
                    String toNode = nodeSwap.get(shardRouting.currentNodeId());
                    clusterRerouteRequestBuilder.add(new MoveAllocationCommand(shardRouting.shardId(), shardRouting.currentNodeId(), toNode));
                    numMoves++;
                }
                if (numMoves > 0) {
                    clusterRerouteRequestBuilder.execute().actionGet();
                    client().admin().cluster().prepareHealth().setWaitForEvents(Priority.LANGUID).setWaitForRelocatingShards(0).setTimeout(ACCEPTABLE_RELOCATION_TIME).execute().actionGet();
                    relocations.countDown();
                }
            }
        }
    });
    relocatingThread.start();
    relocations.await(SQLTransportExecutor.REQUEST_TIMEOUT.getSeconds() + 1, TimeUnit.SECONDS);
    selects.await(SQLTransportExecutor.REQUEST_TIMEOUT.getSeconds() + 1, TimeUnit.SECONDS);
    Throwable throwable = lastThrowable.get();
    if (throwable != null) {
        throw throwable;
    }
    t.join();
    relocatingThread.join();
}
Also used: DiscoveryNode (org.elasticsearch.cluster.node.DiscoveryNode), ClusterRerouteRequestBuilder (org.elasticsearch.action.admin.cluster.reroute.ClusterRerouteRequestBuilder), ClusterStateResponse (org.elasticsearch.action.admin.cluster.state.ClusterStateResponse), MoveAllocationCommand (org.elasticsearch.cluster.routing.allocation.command.MoveAllocationCommand), AtomicReference (java.util.concurrent.atomic.AtomicReference), SQLResponse (io.crate.testing.SQLResponse), CountDownLatch (java.util.concurrent.CountDownLatch), PartitionName (io.crate.metadata.PartitionName), ClusterService (org.elasticsearch.cluster.ClusterService), ShardRouting (org.elasticsearch.cluster.routing.ShardRouting), BytesRef (org.apache.lucene.util.BytesRef), DiscoveryNodes (org.elasticsearch.cluster.node.DiscoveryNodes), Test (org.junit.Test)
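The test above combines three moving parts: a query thread that tolerates failures only when the failed job logged at least three operations, a relocation thread that keeps swapping the partition's shards between the two data nodes, and the main thread that waits on both latches and rethrows the first recorded failure. A stripped-down sketch of that coordination skeleton, with the worker bodies elided (latch counts and the 30-second timeouts are illustrative; the skeleton assumes an enclosing test method declared throws Throwable, as above):

final AtomicReference<Throwable> lastThrowable = new AtomicReference<>();
final CountDownLatch selects = new CountDownLatch(100);
final CountDownLatch relocations = new CountDownLatch(20);

Thread queryThread = new Thread(() -> {
    // ... run queries, count selects down, record unexpected failures in lastThrowable ...
});
Thread relocatingThread = new Thread(() -> {
    // ... issue reroute commands, count relocations down once shards have moved ...
});
queryThread.start();
relocatingThread.start();

// Wait for both workers, then surface the first recorded failure on the test thread.
relocations.await(30, TimeUnit.SECONDS);
selects.await(30, TimeUnit.SECONDS);
Throwable throwable = lastThrowable.get();
if (throwable != null) {
    throw throwable;
}
queryThread.join();
relocatingThread.join();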

Aggregations

PartitionName (io.crate.metadata.PartitionName): 58
BytesRef (org.apache.lucene.util.BytesRef): 50
Test (org.junit.Test): 44
CrateUnitTest (io.crate.test.integration.CrateUnitTest): 20
GetIndexTemplatesResponse (org.elasticsearch.action.admin.indices.template.get.GetIndexTemplatesResponse): 10
WhereClause (io.crate.analyze.WhereClause): 8
TableIdent (io.crate.metadata.TableIdent): 8
MappingMetaData (org.elasticsearch.cluster.metadata.MappingMetaData): 7
Settings (org.elasticsearch.common.settings.Settings): 7
DocTableInfo (io.crate.metadata.doc.DocTableInfo): 6
GetSettingsResponse (org.elasticsearch.action.admin.indices.settings.get.GetSettingsResponse): 5
IndexTemplateMetaData (org.elasticsearch.cluster.metadata.IndexTemplateMetaData): 5
SQLTransportIntegrationTest (io.crate.integrationtests.SQLTransportIntegrationTest): 4
SQLResponse (io.crate.testing.SQLResponse): 4
Map (java.util.Map): 4
Table (io.crate.sql.tree.Table): 3
ArrayList (java.util.ArrayList): 3
HashMap (java.util.HashMap): 3
IndexMetaData (org.elasticsearch.cluster.metadata.IndexMetaData): 3
MetaData (org.elasticsearch.cluster.metadata.MetaData): 3
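Taken together, these usages boil down to one recurring pattern: construct a PartitionName from the table name and the BytesRef-encoded partition values, then compare its asIndexName() against whatever the component under test produced. A hedged utility sketch of that pattern (the PartitionNames class is illustrative and not part of the crate code base):

import io.crate.metadata.PartitionName;
import org.apache.lucene.util.BytesRef;
import java.util.ArrayList;
import java.util.List;

// Illustrative test helper, not part of crate.
public final class PartitionNames {

    private PartitionNames() {
    }

    // Builds the index name of the partition of `table` identified by `values`,
    // mirroring how the tests above construct their expected values.
    public static String indexName(String table, String... values) {
        List<BytesRef> refs = new ArrayList<>(values.length);
        for (String value : values) {
            refs.add(new BytesRef(value));
        }
        return new PartitionName(table, refs).asIndexName();
    }
}

With such a helper, the assertions above read as, for example, assertThat(whereClause.partitions(), contains(PartitionNames.indexName("generated_col", "1420070400000", "-1"))).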