
Example 1 with SupportsPartitionPushDown

Use of org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown in project flink by apache.

The matches method of the class PushPartitionIntoTableSourceScanRule:

@Override
public boolean matches(RelOptRuleCall call) {
    Filter filter = call.rel(0);
    if (filter.getCondition() == null) {
        // nothing to push down without a filter condition
        return false;
    }
    TableSourceTable tableSourceTable = call.rel(1).getTable().unwrap(TableSourceTable.class);
    if (tableSourceTable == null) {
        return false;
    }
    DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
    if (!(dynamicTableSource instanceof SupportsPartitionPushDown)) {
        // the source must opt in to partition push-down
        return false;
    }
    CatalogTable catalogTable = tableSourceTable.contextResolvedTable().getTable();
    if (!catalogTable.isPartitioned() || catalogTable.getPartitionKeys().isEmpty()) {
        // only partitioned tables with declared partition keys qualify
        return false;
    }
    // skip sources where partition push-down has already been applied
    return Arrays.stream(tableSourceTable.abilitySpecs())
            .noneMatch(spec -> spec instanceof PartitionPushDownSpec);
}
Also used:
PartitionPushDownSpec (org.apache.flink.table.planner.plan.abilities.source.PartitionPushDownSpec)
Filter (org.apache.calcite.rel.core.Filter)
TableSourceTable (org.apache.flink.table.planner.plan.schema.TableSourceTable)
CatalogTable (org.apache.flink.table.catalog.CatalogTable)
ResolvedCatalogTable (org.apache.flink.table.catalog.ResolvedCatalogTable)
DynamicTableSource (org.apache.flink.table.connector.source.DynamicTableSource)
SupportsPartitionPushDown (org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown)
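
For context, here is a minimal sketch of the connector side that the instanceof check in matches looks for: a ScanTableSource that also implements SupportsPartitionPushDown. The class name, the "dt" partition key, and the hard-coded partition list are hypothetical; only the listPartitions()/applyPartitions() contract comes from the Flink API, and the runtime provider is deliberately left unimplemented.

// A minimal sketch, not part of the example above. Names and partition values are
// illustrative; only the SupportsPartitionPushDown contract comes from Flink.
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.connector.source.ScanTableSource;
import org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown;

public class PartitionedDemoSource implements ScanTableSource, SupportsPartitionPushDown {

    // partitions the planner is allowed to read; null means "not pruned yet"
    private List<Map<String, String>> remainingPartitions;

    @Override
    public Optional<List<Map<String, String>>> listPartitions() {
        // returning a non-empty Optional lets the rule prune without asking the catalog
        Map<String, String> p1 = new HashMap<>();
        p1.put("dt", "2024-01-01");
        Map<String, String> p2 = new HashMap<>();
        p2.put("dt", "2024-01-02");
        return Optional.of(Arrays.asList(p1, p2));
    }

    @Override
    public void applyPartitions(List<Map<String, String>> remainingPartitions) {
        // the planner calls this with the partitions that survived pruning
        this.remainingPartitions = remainingPartitions;
    }

    @Override
    public ChangelogMode getChangelogMode() {
        return ChangelogMode.insertOnly();
    }

    @Override
    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext runtimeProviderContext) {
        throw new UnsupportedOperationException("runtime provider omitted in this sketch");
    }

    @Override
    public DynamicTableSource copy() {
        PartitionedDemoSource copy = new PartitionedDemoSource();
        copy.remainingPartitions = remainingPartitions;
        return copy;
    }

    @Override
    public String asSummaryString() {
        return "PartitionedDemoSource";
    }
}

With a source like this, the instanceof check in matches passes and the rule can go on to prune partitions and record a PartitionPushDownSpec.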

Example 2 with SupportsPartitionPushDown

Use of org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown in project flink by apache.

The readPartitionsAndPrune method of the class PushPartitionIntoTableSourceScanRule:

private List<Map<String, String>> readPartitionsAndPrune(
        RexBuilder rexBuilder,
        FlinkContext context,
        TableSourceTable tableSourceTable,
        Function<List<Map<String, String>>, List<Map<String, String>>> pruner,
        Seq<RexNode> partitionPredicate,
        List<String> inputFieldNames) {
    // get partitions from the table source or the catalog, then prune them
    Optional<Catalog> catalogOptional = tableSourceTable.contextResolvedTable().getCatalog();
    DynamicTableSource dynamicTableSource = tableSourceTable.tableSource();
    Optional<List<Map<String, String>>> optionalPartitions =
            ((SupportsPartitionPushDown) dynamicTableSource).listPartitions();
    if (optionalPartitions.isPresent()) {
        // the source reports its own partitions; prune them directly
        return pruner.apply(optionalPartitions.get());
    } else {
        // fall back to the catalog when the source doesn't provide partitions via listPartitions()
        if (!catalogOptional.isPresent()) {
            throw new TableException(String.format("Table '%s' connector doesn't provide partitions, and it cannot be loaded from the catalog", tableSourceTable.contextResolvedTable().getIdentifier().asSummaryString()));
        }
        try {
            return readPartitionFromCatalogAndPrune(rexBuilder, context, catalogOptional.get(), tableSourceTable.contextResolvedTable().getIdentifier(), inputFieldNames, partitionPredicate, pruner);
        } catch (TableNotExistException tableNotExistException) {
            throw new TableException(String.format("Table %s is not found in catalog.", tableSourceTable.contextResolvedTable().getIdentifier().asSummaryString()));
        } catch (TableNotPartitionedException tableNotPartitionedException) {
            throw new TableException(String.format("Table %s is not a partitionable source. Validator should have checked it.", tableSourceTable.contextResolvedTable().getIdentifier().asSummaryString()), tableNotPartitionedException);
        }
    }
}
Also used:
TableException (org.apache.flink.table.api.TableException)
TableNotPartitionedException (org.apache.flink.table.catalog.exceptions.TableNotPartitionedException)
TableNotExistException (org.apache.flink.table.catalog.exceptions.TableNotExistException)
List (java.util.List)
ArrayList (java.util.ArrayList)
Catalog (org.apache.flink.table.catalog.Catalog)
DynamicTableSource (org.apache.flink.table.connector.source.DynamicTableSource)
SupportsPartitionPushDown (org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown)
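
To illustrate the first branch above (listPartitions() returns a value), the following self-contained sketch prunes a hard-coded partition list with a stand-in pruner. The "dt" key, the literal dates, and the lambda are hypothetical; in the real rule the pruner is built by the planner from the RexNode partition predicate.

// A minimal, standalone sketch of the "partitions come from the source" path.
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;
import java.util.stream.Collectors;

public class PrunePartitionsSketch {

    public static void main(String[] args) {
        // partitions as a connector would return them from listPartitions()
        Map<String, String> p1 = new HashMap<>();
        p1.put("dt", "2024-01-01");
        Map<String, String> p2 = new HashMap<>();
        p2.put("dt", "2024-01-02");
        Optional<List<Map<String, String>>> optionalPartitions =
                Optional.of(Arrays.asList(p1, p2));

        // stand-in for the planner-built pruner: keep partitions where dt >= '2024-01-02'
        Function<List<Map<String, String>>, List<Map<String, String>>> pruner =
                partitions -> partitions.stream()
                        .filter(p -> p.get("dt").compareTo("2024-01-02") >= 0)
                        .collect(Collectors.toList());

        // mirrors the happy path of readPartitionsAndPrune: partitions came from the
        // source, so no catalog lookup is needed
        List<Map<String, String>> remaining = optionalPartitions.map(pruner).orElseThrow(
                () -> new IllegalStateException("would fall back to the catalog here"));

        System.out.println(remaining); // prints [{dt=2024-01-02}]
    }
}

Only when listPartitions() returns Optional.empty() does the rule reach for the catalog, which is where the TableException cases in the example apply.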

Aggregations

DynamicTableSource (org.apache.flink.table.connector.source.DynamicTableSource): 2
SupportsPartitionPushDown (org.apache.flink.table.connector.source.abilities.SupportsPartitionPushDown): 2
ArrayList (java.util.ArrayList): 1
List (java.util.List): 1
Filter (org.apache.calcite.rel.core.Filter): 1
TableException (org.apache.flink.table.api.TableException): 1
Catalog (org.apache.flink.table.catalog.Catalog): 1
CatalogTable (org.apache.flink.table.catalog.CatalogTable): 1
ResolvedCatalogTable (org.apache.flink.table.catalog.ResolvedCatalogTable): 1
TableNotExistException (org.apache.flink.table.catalog.exceptions.TableNotExistException): 1
TableNotPartitionedException (org.apache.flink.table.catalog.exceptions.TableNotPartitionedException): 1
PartitionPushDownSpec (org.apache.flink.table.planner.plan.abilities.source.PartitionPushDownSpec): 1
TableSourceTable (org.apache.flink.table.planner.plan.schema.TableSourceTable): 1