Example 1 with BatchTableTestUtil

Use of org.apache.flink.table.planner.utils.BatchTableTestUtil in project flink by apache.

The class PushFilterIntoTableSourceScanRuleTest, method setup.

@Before
public void setup() {
    util = batchTestUtil(TableConfig.getDefault());
    ((BatchTableTestUtil) util).buildBatchProgram(FlinkBatchProgram.DEFAULT_REWRITE());
    CalciteConfig calciteConfig = TableConfigUtils.getCalciteConfig(util.tableEnv().getConfig());
    calciteConfig
            .getBatchProgram()
            .get()
            .addLast(
                    "rules",
                    FlinkHepRuleSetProgramBuilder.<BatchOptimizeContext>newBuilder()
                            .setHepRulesExecutionType(HEP_RULES_EXECUTION_TYPE.RULE_COLLECTION())
                            .setHepMatchOrder(HepMatchOrder.BOTTOM_UP)
                            .add(
                                    RuleSets.ofList(
                                            PushFilterIntoTableSourceScanRule.INSTANCE,
                                            CoreRules.FILTER_PROJECT_TRANSPOSE))
                            .build());
    // name: STRING, id: LONG, amount: INT, price: DOUBLE
    String ddl1 =
            "CREATE TABLE MyTable (\n"
                    + "  name STRING,\n"
                    + "  id bigint,\n"
                    + "  amount int,\n"
                    + "  price double\n"
                    + ") WITH (\n"
                    + " 'connector' = 'values',\n"
                    + " 'filterable-fields' = 'amount',\n"
                    + " 'bounded' = 'true'\n"
                    + ")";
    util.tableEnv().executeSql(ddl1);
    String ddl2 =
            "CREATE TABLE VirtualTable (\n"
                    + "  name STRING,\n"
                    + "  id bigint,\n"
                    + "  amount int,\n"
                    + "  virtualField as amount + 1,\n"
                    + "  price double\n"
                    + ") WITH (\n"
                    + " 'connector' = 'values',\n"
                    + " 'filterable-fields' = 'amount',\n"
                    + " 'bounded' = 'true'\n"
                    + ")";
    util.tableEnv().executeSql(ddl2);
}
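
For readability, the two concatenated DDL strings above evaluate to the following SQL statements. MyTable declares only physical columns, while VirtualTable additionally declares a computed column (virtualField) derived from amount; in both tables only amount is declared filterable, so only predicates on amount can be pushed into the source by the rule under test:

```sql
-- ddl1: name STRING, id BIGINT, amount INT, price DOUBLE
CREATE TABLE MyTable (
  name STRING,
  id bigint,
  amount int,
  price double
) WITH (
 'connector' = 'values',
 'filterable-fields' = 'amount',
 'bounded' = 'true'
);

-- ddl2: same physical columns plus a computed column on amount
CREATE TABLE VirtualTable (
  name STRING,
  id bigint,
  amount int,
  virtualField as amount + 1,
  price double
) WITH (
 'connector' = 'values',
 'filterable-fields' = 'amount',
 'bounded' = 'true'
);
```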
Also used : BatchTableTestUtil (org.apache.flink.table.planner.utils.BatchTableTestUtil), CalciteConfig (org.apache.flink.table.planner.calcite.CalciteConfig), Before (org.junit.Before)

Example 2 with BatchTableTestUtil

Use of org.apache.flink.table.planner.utils.BatchTableTestUtil in project flink by apache.

The class BatchOperatorNameTest, method testLegacySourceSink.

@Test
public void testLegacySourceSink() {
    TableSchema schema = TestLegacyFilterableTableSource.defaultSchema();
    TestLegacyFilterableTableSource.createTemporaryTable(
            tEnv,
            schema,
            "MySource",
            true,
            TestLegacyFilterableTableSource.defaultRows().toList(),
            TestLegacyFilterableTableSource.defaultFilterableFields());
    TableSink<Row> sink =
            ((BatchTableTestUtil) util)
                    .createCollectTableSink(
                            schema.getFieldNames(),
                            schema.getTableColumns().stream()
                                    .map(col -> col.getType().getLogicalType())
                                    .toArray(LogicalType[]::new));
    util.testingTableEnv().registerTableSinkInternal("MySink", sink);
    verifyInsert("insert into MySink select * from MySource");
}
Also used : TableSchema (org.apache.flink.table.api.TableSchema), BatchTableTestUtil (org.apache.flink.table.planner.utils.BatchTableTestUtil), LogicalType (org.apache.flink.table.types.logical.LogicalType), Row (org.apache.flink.types.Row), Test (org.junit.Test)

Aggregations

BatchTableTestUtil (org.apache.flink.table.planner.utils.BatchTableTestUtil): 2
TableSchema (org.apache.flink.table.api.TableSchema): 1
CalciteConfig (org.apache.flink.table.planner.calcite.CalciteConfig): 1
LogicalType (org.apache.flink.table.types.logical.LogicalType): 1
Row (org.apache.flink.types.Row): 1
Before (org.junit.Before): 1
Test (org.junit.Test): 1