Example 6 with CalciteConnectionConfigImpl

Use of org.apache.calcite.config.CalciteConnectionConfigImpl in project hive by apache.

The class CalcitePlanner, method createPlanner:

private static RelOptPlanner createPlanner(HiveConf conf, Set<RelNode> corrScalarRexSQWithAgg, StatsSource statsSource, boolean isExplainPlan) {
    final Double maxSplitSize = (double) HiveConf.getLongVar(conf, HiveConf.ConfVars.MAPREDMAXSPLITSIZE);
    final Double maxMemory = (double) HiveConf.getLongVar(conf, HiveConf.ConfVars.HIVECONVERTJOINNOCONDITIONALTASKTHRESHOLD);
    HiveAlgorithmsConf algorithmsConf = new HiveAlgorithmsConf(maxSplitSize, maxMemory);
    HiveRulesRegistry registry = new HiveRulesRegistry();
    Properties calciteConfigProperties = new Properties();
    calciteConfigProperties.setProperty(CalciteConnectionProperty.TIME_ZONE.camelName(), conf.getLocalTimeZone().getId());
    calciteConfigProperties.setProperty(CalciteConnectionProperty.MATERIALIZATIONS_ENABLED.camelName(), Boolean.FALSE.toString());
    CalciteConnectionConfig calciteConfig = new CalciteConnectionConfigImpl(calciteConfigProperties);
    boolean isCorrelatedColumns = HiveConf.getBoolVar(conf, HiveConf.ConfVars.HIVE_CBO_STATS_CORRELATED_MULTI_KEY_JOINS);
    boolean heuristicMaterializationStrategy = HiveConf.getVar(conf, HiveConf.ConfVars.HIVE_MATERIALIZED_VIEW_REWRITING_SELECTION_STRATEGY).equals("heuristic");
    HivePlannerContext confContext = new HivePlannerContext(algorithmsConf, registry, calciteConfig, corrScalarRexSQWithAgg, new HiveConfPlannerContext(isCorrelatedColumns, heuristicMaterializationStrategy, isExplainPlan), statsSource);
    RelOptPlanner planner = HiveVolcanoPlanner.createPlanner(confContext);
    planner.addListener(new RuleEventLogger());
    return planner;
}
Also used : CalciteConnectionConfigImpl(org.apache.calcite.config.CalciteConnectionConfigImpl) HivePlannerContext(org.apache.hadoop.hive.ql.optimizer.calcite.HivePlannerContext) HiveRulesRegistry(org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRulesRegistry) CalciteConnectionConfig(org.apache.calcite.config.CalciteConnectionConfig) HiveConfPlannerContext(org.apache.hadoop.hive.ql.optimizer.calcite.HiveConfPlannerContext) QueryProperties(org.apache.hadoop.hive.ql.QueryProperties) Properties(java.util.Properties) RelOptPlanner(org.apache.calcite.plan.RelOptPlanner) HiveAlgorithmsConf(org.apache.hadoop.hive.ql.optimizer.calcite.cost.HiveAlgorithmsConf) RuleEventLogger(org.apache.hadoop.hive.ql.optimizer.calcite.RuleEventLogger)
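
The Properties passed to CalciteConnectionConfigImpl are keyed by the camel-case names of CalciteConnectionProperty entries, and the resulting config exposes them through typed getters. A minimal standalone sketch of that round trip (the wrapper class is illustrative only):

import java.util.Properties;

import org.apache.calcite.config.CalciteConnectionConfig;
import org.apache.calcite.config.CalciteConnectionConfigImpl;
import org.apache.calcite.config.CalciteConnectionProperty;

public class ConnectionConfigSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Same keys as in the Hive example above.
        props.setProperty(CalciteConnectionProperty.TIME_ZONE.camelName(), "UTC");
        props.setProperty(CalciteConnectionProperty.MATERIALIZATIONS_ENABLED.camelName(), Boolean.FALSE.toString());
        CalciteConnectionConfig config = new CalciteConnectionConfigImpl(props);
        // Typed getters read the same properties back.
        System.out.println(config.timeZone());                // UTC
        System.out.println(config.materializationsEnabled()); // false
    }
}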

Example 7 with CalciteConnectionConfigImpl

Use of org.apache.calcite.config.CalciteConnectionConfigImpl in project beam by apache.

The class TableScanConverter, method convert:

@Override
public RelNode convert(ResolvedTableScan zetaNode, List<RelNode> inputs) {
    List<String> tablePath = getTablePath(zetaNode.getTable());
    SchemaPlus defaultSchemaPlus = getConfig().getDefaultSchema();
    if (defaultSchemaPlus == null) {
        throw new AssertionError("Default schema is null.");
    }
    // TODO: reject incorrect top-level schema
    Table calciteTable = TableResolution.resolveCalciteTable(defaultSchemaPlus, tablePath);
    // we already resolved the table before passing the query to Analyzer, so it should be there
    checkNotNull(calciteTable, "Unable to resolve the table path %s in schema %s", tablePath, defaultSchemaPlus.getName());
    String defaultSchemaName = defaultSchemaPlus.getName();
    final CalciteCatalogReader catalogReader = new CalciteCatalogReader(CalciteSchema.from(defaultSchemaPlus), ImmutableList.of(defaultSchemaName), getCluster().getTypeFactory(), new CalciteConnectionConfigImpl(new Properties()));
    RelOptTableImpl relOptTable = RelOptTableImpl.create(catalogReader, calciteTable.getRowType(getCluster().getTypeFactory()), calciteTable, ImmutableList.<String>builder().add(defaultSchemaName).addAll(tablePath).build());
    if (calciteTable instanceof TranslatableTable) {
        return ((TranslatableTable) calciteTable).toRel(createToRelContext(), relOptTable);
    } else {
        throw new UnsupportedOperationException("Does not support non TranslatableTable type table!");
    }
}
Also used : CalciteConnectionConfigImpl(org.apache.beam.vendor.calcite.v1_28_0.org.apache.calcite.config.CalciteConnectionConfigImpl) RelOptTable(org.apache.beam.vendor.calcite.v1_28_0.org.apache.calcite.plan.RelOptTable) Table(org.apache.beam.vendor.calcite.v1_28_0.org.apache.calcite.schema.Table) TranslatableTable(org.apache.beam.vendor.calcite.v1_28_0.org.apache.calcite.schema.TranslatableTable) CalciteCatalogReader(org.apache.beam.vendor.calcite.v1_28_0.org.apache.calcite.prepare.CalciteCatalogReader) SchemaPlus(org.apache.beam.vendor.calcite.v1_28_0.org.apache.calcite.schema.SchemaPlus) RelOptTableImpl(org.apache.beam.vendor.calcite.v1_28_0.org.apache.calcite.prepare.RelOptTableImpl) Properties(java.util.Properties)
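
Beam uses its vendored copy of Calcite (org.apache.beam.vendor.calcite.v1_28_0.*), but the CalciteCatalogReader pattern is the same with stock Calcite. A hedged, self-contained sketch of building a catalog reader over an empty root schema and resolving a name through it (the "orders" table is illustrative and is not registered, so the lookup returns null here):

import java.util.Collections;
import java.util.Properties;

import org.apache.calcite.config.CalciteConnectionConfigImpl;
import org.apache.calcite.jdbc.CalciteSchema;
import org.apache.calcite.jdbc.JavaTypeFactoryImpl;
import org.apache.calcite.prepare.CalciteCatalogReader;
import org.apache.calcite.prepare.Prepare;
import org.apache.calcite.schema.SchemaPlus;

public class CatalogReaderSketch {

    public static void main(String[] args) {
        // An empty root schema; a real application would register its tables here.
        SchemaPlus rootSchema = CalciteSchema.createRootSchema(false).plus();
        CalciteCatalogReader catalogReader = new CalciteCatalogReader(
                CalciteSchema.from(rootSchema),
                Collections.emptyList(), // default schema path
                new JavaTypeFactoryImpl(),
                new CalciteConnectionConfigImpl(new Properties()));
        // Resolves a qualified name against the schema; null if the table is absent.
        Prepare.PreparingTable table = catalogReader.getTable(Collections.singletonList("orders"));
        System.out.println(table == null ? "orders not found" : "found: " + table.getQualifiedName());
    }
}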

Example 8 with CalciteConnectionConfigImpl

Use of org.apache.calcite.config.CalciteConnectionConfigImpl in project storm by apache.

The class StormSqlContext, method buildFrameWorkConfig:

public FrameworkConfig buildFrameWorkConfig() {
    if (hasUdf) {
        List<SqlOperatorTable> sqlOperatorTables = new ArrayList<>();
        sqlOperatorTables.add(SqlStdOperatorTable.instance());
        sqlOperatorTables.add(new CalciteCatalogReader(CalciteSchema.from(schema), Collections.emptyList(), typeFactory, new CalciteConnectionConfigImpl(new Properties())));
        return Frameworks.newConfigBuilder().defaultSchema(schema).operatorTable(new ChainedSqlOperatorTable(sqlOperatorTables)).build();
    } else {
        return Frameworks.newConfigBuilder().defaultSchema(schema).build();
    }
}
Also used : CalciteConnectionConfigImpl(org.apache.calcite.config.CalciteConnectionConfigImpl) ChainedSqlOperatorTable(org.apache.calcite.sql.util.ChainedSqlOperatorTable) CalciteCatalogReader(org.apache.calcite.prepare.CalciteCatalogReader) SqlOperatorTable(org.apache.calcite.sql.SqlOperatorTable) ArrayList(java.util.ArrayList) Properties(java.util.Properties)
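
Once a FrameworkConfig is built this way, it can be handed to Calcite's Planner facade. A hedged sketch of how a caller might parse, validate and convert a query with it (the SQL text is illustrative and must reference tables registered in the schema):

import org.apache.calcite.rel.RelNode;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;
import org.apache.calcite.tools.Planner;

public class StormSqlPlanSketch {

    // Turns a SQL string into a relational plan using the config built above.
    static RelNode toRel(FrameworkConfig config, String sql) throws Exception {
        Planner planner = Frameworks.getPlanner(config);
        SqlNode parsed = planner.parse(sql);
        SqlNode validated = planner.validate(parsed);
        return planner.rel(validated).rel;
    }
}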

Example 9 with CalciteConnectionConfigImpl

Use of org.apache.calcite.config.CalciteConnectionConfigImpl in project calcite by apache.

The class PlannerImpl, method createCatalogReader:

// CalciteCatalogReader is stateless; no need to store one
private CalciteCatalogReader createCatalogReader() {
    final SchemaPlus rootSchema = rootSchema(defaultSchema);
    final Context context = config.getContext();
    final CalciteConnectionConfig connectionConfig;
    if (context != null) {
        connectionConfig = context.unwrap(CalciteConnectionConfig.class);
    } else {
        Properties properties = new Properties();
        properties.setProperty(CalciteConnectionProperty.CASE_SENSITIVE.camelName(), String.valueOf(parserConfig.caseSensitive()));
        connectionConfig = new CalciteConnectionConfigImpl(properties);
    }
    return new CalciteCatalogReader(CalciteSchema.from(rootSchema), CalciteSchema.from(defaultSchema).path(null), typeFactory, connectionConfig);
}
Also used : Context(org.apache.calcite.plan.Context) CalciteConnectionConfigImpl(org.apache.calcite.config.CalciteConnectionConfigImpl) CalciteConnectionConfig(org.apache.calcite.config.CalciteConnectionConfig) SchemaPlus(org.apache.calcite.schema.SchemaPlus) Properties(java.util.Properties)
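
The context.unwrap branch above is taken when the caller attaches a CalciteConnectionConfig to the FrameworkConfig's Context; otherwise the method falls back to a config derived from the parser's case-sensitivity setting. A hedged sketch of supplying an explicit connection config that way (the schema and property values are illustrative):

import java.util.Properties;

import org.apache.calcite.config.CalciteConnectionConfigImpl;
import org.apache.calcite.config.CalciteConnectionProperty;
import org.apache.calcite.plan.Contexts;
import org.apache.calcite.schema.SchemaPlus;
import org.apache.calcite.tools.FrameworkConfig;
import org.apache.calcite.tools.Frameworks;

public class PlannerContextSketch {

    // Builds a FrameworkConfig whose Context carries an explicit connection config,
    // so createCatalogReader() can unwrap it instead of deriving one from Properties.
    static FrameworkConfig build(SchemaPlus rootSchema) {
        Properties props = new Properties();
        props.setProperty(CalciteConnectionProperty.CASE_SENSITIVE.camelName(), "false");
        return Frameworks.newConfigBuilder()
                .defaultSchema(rootSchema)
                .context(Contexts.of(new CalciteConnectionConfigImpl(props)))
                .build();
    }
}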

Example 10 with CalciteConnectionConfigImpl

Use of org.apache.calcite.config.CalciteConnectionConfigImpl in project calcite by apache.

The class RelOptRulesTest, method testExtractYearMonthToRange:

@Test
public void testExtractYearMonthToRange() {
    final String sql = "select *\n"
            + "from sales.emp_b as e\n"
            + "where extract(year from birthdate) = 2014\n"
            + "and extract(month from birthdate) = 4";
    HepProgram program = new HepProgramBuilder().addRuleInstance(DateRangeRules.FILTER_INSTANCE).build();
    final Context context = Contexts.of(new CalciteConnectionConfigImpl(new Properties()));
    sql(sql).with(program).withContext(context).check();
}
Also used : Context(org.apache.calcite.plan.Context) CalciteConnectionConfigImpl(org.apache.calcite.config.CalciteConnectionConfigImpl) HepProgram(org.apache.calcite.plan.hep.HepProgram) HepProgramBuilder(org.apache.calcite.plan.hep.HepProgramBuilder) Properties(java.util.Properties) Test(org.junit.Test)
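
DateRangeRules.FILTER_INSTANCE consults the CalciteConnectionConfig carried in the planner's Context for the session time zone, which is why the test wraps an (empty) CalciteConnectionConfigImpl in Contexts.of. A hedged sketch of applying the same rule outside the test harness, with an explicit time zone (the input RelNode is assumed to be a previously built plan containing an EXTRACT-based filter):

import java.util.Properties;

import org.apache.calcite.config.CalciteConnectionConfigImpl;
import org.apache.calcite.config.CalciteConnectionProperty;
import org.apache.calcite.plan.Contexts;
import org.apache.calcite.plan.hep.HepPlanner;
import org.apache.calcite.plan.hep.HepProgram;
import org.apache.calcite.plan.hep.HepProgramBuilder;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.rules.DateRangeRules;

public class DateRangeRewriteSketch {

    // Rewrites EXTRACT-based filters into date-range comparisons.
    static RelNode rewrite(RelNode input) {
        HepProgram program = new HepProgramBuilder()
                .addRuleInstance(DateRangeRules.FILTER_INSTANCE)
                .build();
        Properties props = new Properties();
        props.setProperty(CalciteConnectionProperty.TIME_ZONE.camelName(), "UTC");
        // The connection config in the planner Context supplies the time zone to the rule.
        HepPlanner planner = new HepPlanner(program, Contexts.of(new CalciteConnectionConfigImpl(props)));
        planner.setRoot(input);
        return planner.findBestExp();
    }
}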

Aggregations

Properties (java.util.Properties) 13
CalciteConnectionConfigImpl (org.apache.calcite.config.CalciteConnectionConfigImpl) 12
CalciteConnectionConfig (org.apache.calcite.config.CalciteConnectionConfig) 4
Context (org.apache.calcite.plan.Context) 4
RelOptPlanner (org.apache.calcite.plan.RelOptPlanner) 2
HepProgram (org.apache.calcite.plan.hep.HepProgram) 2
HepProgramBuilder (org.apache.calcite.plan.hep.HepProgramBuilder) 2
CalciteCatalogReader (org.apache.calcite.prepare.CalciteCatalogReader) 2
SchemaPlus (org.apache.calcite.schema.SchemaPlus) 2
QueryProperties (org.apache.hadoop.hive.ql.QueryProperties) 2
HiveConfPlannerContext (org.apache.hadoop.hive.ql.optimizer.calcite.HiveConfPlannerContext) 2
HivePlannerContext (org.apache.hadoop.hive.ql.optimizer.calcite.HivePlannerContext) 2
HiveAlgorithmsConf (org.apache.hadoop.hive.ql.optimizer.calcite.cost.HiveAlgorithmsConf) 2
HiveRulesRegistry (org.apache.hadoop.hive.ql.optimizer.calcite.rules.HiveRulesRegistry) 2
Test (org.junit.Test) 2
StatementExecutionException (herddb.model.StatementExecutionException) 1
ArrayList (java.util.ArrayList) 1
CalciteConnectionConfigImpl (org.apache.beam.vendor.calcite.v1_28_0.org.apache.calcite.config.CalciteConnectionConfigImpl) 1
RelOptTable (org.apache.beam.vendor.calcite.v1_28_0.org.apache.calcite.plan.RelOptTable) 1
CalciteCatalogReader (org.apache.beam.vendor.calcite.v1_28_0.org.apache.calcite.prepare.CalciteCatalogReader) 1