
Example 21 with QueryOperation

Use of org.apache.flink.table.operations.QueryOperation in the Apache Flink project.

From the class TableEnvironmentImpl, the from method:

@Override
public Table from(TableDescriptor descriptor) {
    Preconditions.checkNotNull(descriptor, "Table descriptor must not be null.");
    // Resolve the inline descriptor into a fully validated catalog table (schema, options).
    final ResolvedCatalogTable resolvedCatalogBaseTable =
            catalogManager.resolveCatalogTable(descriptor.toCatalogTable());
    // Wrap it as an anonymous (non-registered) table so it can be scanned like a catalog table.
    final QueryOperation queryOperation =
            new SourceQueryOperation(ContextResolvedTable.anonymous(resolvedCatalogBaseTable));
    return createTable(queryOperation);
}
Also used: ResolvedCatalogTable (org.apache.flink.table.catalog.ResolvedCatalogTable), TableSourceQueryOperation (org.apache.flink.table.operations.TableSourceQueryOperation), SourceQueryOperation (org.apache.flink.table.operations.SourceQueryOperation), QueryOperation (org.apache.flink.table.operations.QueryOperation)
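
For context, the following is a minimal sketch of the user-facing call that exercises this code path; it is not part of the Flink sources above, and the connector, schema, and option values are illustrative. Passing an inline TableDescriptor to TableEnvironment.from resolves it into an anonymous ResolvedCatalogTable wrapped in a SourceQueryOperation, as shown in the snippet above.

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableDescriptor;
import org.apache.flink.table.api.TableEnvironment;

public class FromDescriptorExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());
        // Inline descriptor: nothing is registered in the catalog, which is why
        // from(TableDescriptor) produces an anonymous ContextResolvedTable.
        TableDescriptor descriptor =
                TableDescriptor.forConnector("datagen")
                        .schema(
                                Schema.newBuilder()
                                        .column("id", DataTypes.BIGINT())
                                        .column("name", DataTypes.STRING())
                                        .build())
                        .option("rows-per-second", "5")
                        .build();
        Table table = tEnv.from(descriptor);
        table.printSchema();
    }
}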

Example 22 with QueryOperation

Use of org.apache.flink.table.operations.QueryOperation in the Apache Flink project.

From the class InConverter, the convert method:

@Override
public RexNode convert(CallExpression call, CallExpressionConvertRule.ConvertContext context) {
    checkArgument(call, call.getChildren().size() > 1);
    Expression headExpr = call.getChildren().get(1);
    if (headExpr instanceof TableReferenceExpression) {
        // IN with a table sub-query: translate the referenced QueryOperation into a
        // relational tree and wrap the whole call in a Calcite RexSubQuery.
        QueryOperation tableOperation = ((TableReferenceExpression) headExpr).getQueryOperation();
        RexNode child = context.toRexNode(call.getChildren().get(0));
        return RexSubQuery.in(
                ((FlinkRelBuilder) context.getRelBuilder())
                        .queryOperation(tableOperation)
                        .build(),
                ImmutableList.of(child));
    } else {
        // IN with a plain value list: delegate to Calcite's RexBuilder#makeIn.
        List<RexNode> child = toRexNodes(context, call.getChildren());
        return context.getRelBuilder()
                .getRexBuilder()
                .makeIn(child.get(0), child.subList(1, child.size()));
    }
}
Also used: CallExpression (org.apache.flink.table.expressions.CallExpression), Expression (org.apache.flink.table.expressions.Expression), TableReferenceExpression (org.apache.flink.table.expressions.TableReferenceExpression), FlinkRelBuilder (org.apache.flink.table.planner.calcite.FlinkRelBuilder), QueryOperation (org.apache.flink.table.operations.QueryOperation), RexNode (org.apache.calcite.rex.RexNode)
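
As a rough illustration of what reaches this converter (the table names below are hypothetical), the two branches correspond to the two forms of the Table API in expression: in(Table) carries a TableReferenceExpression and becomes a RexSubQuery, while in(value, ...) is a plain value list handled by RexBuilder#makeIn. A minimal sketch:

import static org.apache.flink.table.api.Expressions.$;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class InConverterUsageSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inBatchMode().build());
        // "Orders" and "VipCustomers" are assumed to be registered tables.
        Table orders = tEnv.from("Orders");
        Table vipCustomers = tEnv.from("VipCustomers").select($("customer_id"));
        // Sub-query form: the second child of the IN call is a TableReferenceExpression,
        // so InConverter builds a RexSubQuery over the referenced QueryOperation.
        Table viaSubQuery = orders.where($("customer_id").in(vipCustomers));
        // Value-list form: all children are plain expressions, handled by RexBuilder#makeIn.
        Table viaValueList = orders.where($("status").in("OPEN", "PAID"));
    }
}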

Example 23 with QueryOperation

Use of org.apache.flink.table.operations.QueryOperation in the Apache Flink project.

From the class SqlToOperationConverter, the convertAlterTableCompact method:

/**
 * Convert `ALTER TABLE ... COMPACT` operation to {@link ModifyOperation} for Flink's managed
 * table to trigger a compaction batch job.
 */
private ModifyOperation convertAlterTableCompact(
        ObjectIdentifier tableIdentifier,
        ContextResolvedTable contextResolvedTable,
        SqlAlterTableCompact alterTableCompact) {
    Catalog catalog = catalogManager.getCatalog(tableIdentifier.getCatalogName()).orElse(null);
    ResolvedCatalogTable resolvedCatalogTable = contextResolvedTable.getResolvedTable();
    if (ManagedTableListener.isManagedTable(catalog, resolvedCatalogTable)) {
        Map<String, String> partitionKVs = alterTableCompact.getPartitionKVs();
        CatalogPartitionSpec partitionSpec = new CatalogPartitionSpec(Collections.emptyMap());
        if (partitionKVs != null) {
            // Validate that every referenced partition column is defined in the table schema.
            List<String> partitionKeys = resolvedCatalogTable.getPartitionKeys();
            Set<String> validPartitionKeySet = new HashSet<>(partitionKeys);
            String exMsg =
                    partitionKeys.isEmpty()
                            ? String.format("Table %s is not partitioned.", tableIdentifier)
                            : String.format(
                                    "Available ordered partition columns: [%s]",
                                    partitionKeys.stream()
                                            .collect(Collectors.joining("', '", "'", "'")));
            partitionKVs.forEach(
                    (partitionKey, partitionValue) -> {
                        if (!validPartitionKeySet.contains(partitionKey)) {
                            throw new ValidationException(
                                    String.format(
                                            "Partition column '%s' not defined in the table schema. %s",
                                            partitionKey, exMsg));
                        }
                    });
            partitionSpec = new CatalogPartitionSpec(partitionKVs);
        }
        // Compaction reads the managed table with dedicated options and writes back into it.
        Map<String, String> compactOptions =
                catalogManager.resolveCompactManagedTableOptions(
                        resolvedCatalogTable, tableIdentifier, partitionSpec);
        QueryOperation child = new SourceQueryOperation(contextResolvedTable, compactOptions);
        return new SinkModifyOperation(
                contextResolvedTable, child, partitionSpec.getPartitionSpec(), false, compactOptions);
    }
    throw new ValidationException(
            String.format(
                    "ALTER TABLE COMPACT operation is not supported for non-managed table %s",
                    tableIdentifier));
}
Also used: ValidationException (org.apache.flink.table.api.ValidationException), ResolvedCatalogTable (org.apache.flink.table.catalog.ResolvedCatalogTable), SinkModifyOperation (org.apache.flink.table.operations.SinkModifyOperation), SourceQueryOperation (org.apache.flink.table.operations.SourceQueryOperation), SqlShowCurrentCatalog (org.apache.flink.sql.parser.dql.SqlShowCurrentCatalog), Catalog (org.apache.flink.table.catalog.Catalog), SqlUseCatalog (org.apache.flink.sql.parser.ddl.SqlUseCatalog), SqlDropCatalog (org.apache.flink.sql.parser.ddl.SqlDropCatalog), SqlCreateCatalog (org.apache.flink.sql.parser.ddl.SqlCreateCatalog), CatalogPartitionSpec (org.apache.flink.table.catalog.CatalogPartitionSpec), HashSet (java.util.HashSet), QueryOperation (org.apache.flink.table.operations.QueryOperation)
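
The conversion above is reached from SQL. Below is a minimal sketch of the statements that produce a SqlAlterTableCompact node; the table name and partition column are hypothetical, and the table must be a Flink managed table, otherwise the ValidationException at the end of the method is thrown.

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class AlterTableCompactSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().inBatchMode().build());
        // Compact the whole managed table.
        tEnv.executeSql("ALTER TABLE ManagedOrders COMPACT");
        // Compact a single partition; a partition column that is not part of the table's
        // schema triggers the ValidationException built in convertAlterTableCompact.
        tEnv.executeSql("ALTER TABLE ManagedOrders PARTITION (dt = '2022-01-01') COMPACT");
    }
}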

Aggregations

QueryOperation (org.apache.flink.table.operations.QueryOperation): 23 usages
ValidationException (org.apache.flink.table.api.ValidationException): 10 usages
Expression (org.apache.flink.table.expressions.Expression): 10 usages
ArrayList (java.util.ArrayList): 8 usages
TableException (org.apache.flink.table.api.TableException): 7 usages
List (java.util.List): 6 usages
ResolvedExpression (org.apache.flink.table.expressions.ResolvedExpression): 6 usages
ExpressionResolver (org.apache.flink.table.expressions.resolver.ExpressionResolver): 6 usages
Operation (org.apache.flink.table.operations.Operation): 6 usages
ValuesQueryOperation (org.apache.flink.table.operations.ValuesQueryOperation): 6 usages
CreateTableOperation (org.apache.flink.table.operations.ddl.CreateTableOperation): 6 usages
ObjectIdentifier (org.apache.flink.table.catalog.ObjectIdentifier): 5 usages
SqlExpressionResolver (org.apache.flink.table.expressions.resolver.SqlExpressionResolver): 5 usages
DistinctQueryOperation (org.apache.flink.table.operations.DistinctQueryOperation): 5 usages
FilterQueryOperation (org.apache.flink.table.operations.FilterQueryOperation): 5 usages
IOException (java.io.IOException): 4 usages
ResolvedCatalogTable (org.apache.flink.table.catalog.ResolvedCatalogTable): 4 usages
UnresolvedCallExpression (org.apache.flink.table.expressions.UnresolvedCallExpression): 4 usages
DescribeTableOperation (org.apache.flink.table.operations.DescribeTableOperation): 4 usages
ExplainOperation (org.apache.flink.table.operations.ExplainOperation): 4 usages