
Example 26 with TableException

use of org.apache.flink.table.api.TableException in project flink by apache.

the class DeadlockBreakupProcessor method process.

@Override
public ExecNodeGraph process(ExecNodeGraph execGraph, ProcessorContext context) {
    if (!execGraph.getRootNodes().stream().allMatch(r -> r instanceof BatchExecNode)) {
        throw new TableException("Only BatchExecNode DAG are supported now.");
    }
    InputPriorityConflictResolver resolver =
            new InputPriorityConflictResolver(
                    execGraph.getRootNodes(),
                    InputProperty.DamBehavior.END_INPUT,
                    StreamExchangeMode.BATCH,
                    context.getPlanner().getConfiguration());
    resolver.detectAndResolve();
    return execGraph;
}
Also used : BatchExecNode(org.apache.flink.table.planner.plan.nodes.exec.batch.BatchExecNode) StreamExchangeMode(org.apache.flink.streaming.api.transformations.StreamExchangeMode) InputPriorityConflictResolver(org.apache.flink.table.planner.plan.nodes.exec.processor.utils.InputPriorityConflictResolver) InputProperty(org.apache.flink.table.planner.plan.nodes.exec.InputProperty) TableException(org.apache.flink.table.api.TableException) ExecNodeGraph(org.apache.flink.table.planner.plan.nodes.exec.ExecNodeGraph)
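The validation above can be reduced to a minimal self-contained sketch: stream over the root nodes, require every one to be a batch node via allMatch, and fail fast otherwise. The class names below are stand-ins, not the real Flink planner types.

```java
import java.util.List;

public class RootNodeCheck {
    interface Node {}
    static class BatchNode implements Node {}
    static class StreamNode implements Node {}

    // Hypothetical stand-in for org.apache.flink.table.api.TableException.
    static class TableException extends RuntimeException {
        TableException(String msg) { super(msg); }
    }

    // Mirrors the allMatch guard in DeadlockBreakupProcessor#process.
    static void validateRoots(List<Node> roots) {
        if (!roots.stream().allMatch(r -> r instanceof BatchNode)) {
            throw new TableException("Only BatchExecNode DAG are supported now.");
        }
    }

    public static void main(String[] args) {
        validateRoots(List.of(new BatchNode())); // accepted
        try {
            validateRoots(List.of(new BatchNode(), new StreamNode()));
        } catch (TableException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Failing fast before the conflict resolver runs keeps the error close to its cause, rather than surfacing it later as a confusing translation failure.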

Example 27 with TableException

use of org.apache.flink.table.api.TableException in project flink by apache.

the class BatchExecPythonGroupWindowAggregate method createPythonOneInputTransformation.

private OneInputTransformation<RowData, RowData> createPythonOneInputTransformation(
        Transformation<RowData> inputTransform,
        RowType inputRowType,
        RowType outputRowType,
        int maxLimitSize,
        long windowSize,
        long slideSize,
        Configuration pythonConfig,
        ExecNodeConfig config) {
    int[] namePropertyTypeArray = Arrays.stream(namedWindowProperties).mapToInt(p -> {
        WindowProperty property = p.getProperty();
        if (property instanceof WindowStart) {
            return 0;
        }
        if (property instanceof WindowEnd) {
            return 1;
        }
        if (property instanceof RowtimeAttribute) {
            return 2;
        }
        throw new TableException("Unexpected property " + property);
    }).toArray();
    Tuple2<int[], PythonFunctionInfo[]> aggInfos = CommonPythonUtil.extractPythonAggregateFunctionInfosFromAggregateCall(aggCalls);
    int[] pythonUdafInputOffsets = aggInfos.f0;
    PythonFunctionInfo[] pythonFunctionInfos = aggInfos.f1;
    OneInputStreamOperator<RowData, RowData> pythonOperator =
            getPythonGroupWindowAggregateFunctionOperator(
                    config,
                    pythonConfig,
                    inputRowType,
                    outputRowType,
                    maxLimitSize,
                    windowSize,
                    slideSize,
                    namePropertyTypeArray,
                    pythonUdafInputOffsets,
                    pythonFunctionInfos);
    return ExecNodeUtil.createOneInputTransformation(
            inputTransform,
            createTransformationName(config),
            createTransformationDescription(config),
            pythonOperator,
            InternalTypeInfo.of(outputRowType),
            inputTransform.getParallelism());
}
Also used : Arrays(java.util.Arrays) InputProperty(org.apache.flink.table.planner.plan.nodes.exec.InputProperty) Tuple2(org.apache.flink.api.java.tuple.Tuple2) RowtimeAttribute(org.apache.flink.table.runtime.groupwindow.RowtimeAttribute) RowType(org.apache.flink.table.types.logical.RowType) Constructor(java.lang.reflect.Constructor) ExecNode(org.apache.flink.table.planner.plan.nodes.exec.ExecNode) ExecNodeUtil(org.apache.flink.table.planner.plan.nodes.exec.utils.ExecNodeUtil) WindowEnd(org.apache.flink.table.runtime.groupwindow.WindowEnd) ManagedMemoryUseCase(org.apache.flink.core.memory.ManagedMemoryUseCase) CodeGeneratorContext(org.apache.flink.table.planner.codegen.CodeGeneratorContext) Projection(org.apache.flink.table.connector.Projection) ProjectionCodeGenerator(org.apache.flink.table.planner.codegen.ProjectionCodeGenerator) WindowCodeGenerator(org.apache.flink.table.planner.codegen.agg.batch.WindowCodeGenerator) ExecNodeContext(org.apache.flink.table.planner.plan.nodes.exec.ExecNodeContext) WindowStart(org.apache.flink.table.runtime.groupwindow.WindowStart) RowData(org.apache.flink.table.data.RowData) PlannerBase(org.apache.flink.table.planner.delegation.PlannerBase) CommonPythonUtil(org.apache.flink.table.planner.plan.nodes.exec.utils.CommonPythonUtil) SingleTransformationTranslator(org.apache.flink.table.planner.plan.nodes.exec.SingleTransformationTranslator) ExecNodeConfig(org.apache.flink.table.planner.plan.nodes.exec.ExecNodeConfig) Configuration(org.apache.flink.configuration.Configuration) TableException(org.apache.flink.table.api.TableException) PythonFunctionInfo(org.apache.flink.table.functions.python.PythonFunctionInfo) OneInputTransformation(org.apache.flink.streaming.api.transformations.OneInputTransformation) InvocationTargetException(java.lang.reflect.InvocationTargetException) InternalTypeInfo(org.apache.flink.table.runtime.typeutils.InternalTypeInfo) ExecEdge(org.apache.flink.table.planner.plan.nodes.exec.ExecEdge) LogicalWindow(org.apache.flink.table.planner.plan.logical.LogicalWindow) AggregateCall(org.apache.calcite.rel.core.AggregateCall) ExecNodeBase(org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase) GeneratedProjection(org.apache.flink.table.runtime.generated.GeneratedProjection) Transformation(org.apache.flink.api.dag.Transformation) OneInputStreamOperator(org.apache.flink.streaming.api.operators.OneInputStreamOperator) ExecutionConfigOptions(org.apache.flink.table.api.config.ExecutionConfigOptions) WindowProperty(org.apache.flink.table.runtime.groupwindow.WindowProperty) Collections(java.util.Collections) NamedWindowProperty(org.apache.flink.table.runtime.groupwindow.NamedWindowProperty)
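The property-to-code mapping at the start of that method encodes each named window property as an int (start = 0, end = 1, rowtime = 2) so it can be passed to the Python operator as a flat array. A minimal sketch of that mapping, using stand-in classes rather than the real Flink groupwindow types:

```java
import java.util.Arrays;

public class WindowPropertyCodes {
    // Stand-ins for org.apache.flink.table.runtime.groupwindow types.
    interface WindowProperty {}
    static class WindowStart implements WindowProperty {}
    static class WindowEnd implements WindowProperty {}
    static class RowtimeAttribute implements WindowProperty {}

    // Same cascade of instanceof checks as in the aggregate node above.
    static int code(WindowProperty p) {
        if (p instanceof WindowStart) return 0;
        if (p instanceof WindowEnd) return 1;
        if (p instanceof RowtimeAttribute) return 2;
        throw new RuntimeException("Unexpected property " + p);
    }

    public static void main(String[] args) {
        int[] codes = Arrays.stream(new WindowProperty[] {
                new WindowStart(), new WindowEnd(), new RowtimeAttribute()
        }).mapToInt(WindowPropertyCodes::code).toArray();
        System.out.println(Arrays.toString(codes)); // [0, 1, 2]
    }
}
```

The final throw is the catch-all that produces the TableException in the real code: any property outside the three known kinds is a planner bug, not a user error.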

Example 28 with TableException

use of org.apache.flink.table.api.TableException in project flink by apache.

the class BatchExecSortLimit method translateToPlanInternal.

@SuppressWarnings("unchecked")
@Override
protected Transformation<RowData> translateToPlanInternal(PlannerBase planner, ExecNodeConfig config) {
    if (limitEnd == Long.MAX_VALUE) {
        throw new TableException("Not support limitEnd is max value now!");
    }
    ExecEdge inputEdge = getInputEdges().get(0);
    Transformation<RowData> inputTransform = (Transformation<RowData>) inputEdge.translateToPlan(planner);
    RowType inputType = (RowType) inputEdge.getOutputType();
    // generate comparator
    GeneratedRecordComparator genComparator = ComparatorCodeGenerator.gen(config.getTableConfig(), "SortLimitComparator", inputType, sortSpec);
    // TODO If input is ordered, there is no need to use the heap.
    SortLimitOperator operator = new SortLimitOperator(isGlobal, limitStart, limitEnd, genComparator);
    return ExecNodeUtil.createOneInputTransformation(
            inputTransform,
            createTransformationName(config),
            createTransformationDescription(config),
            SimpleOperatorFactory.of(operator),
            InternalTypeInfo.of(inputType),
            inputTransform.getParallelism());
}
Also used : TableException(org.apache.flink.table.api.TableException) RowData(org.apache.flink.table.data.RowData) Transformation(org.apache.flink.api.dag.Transformation) ExecEdge(org.apache.flink.table.planner.plan.nodes.exec.ExecEdge) RowType(org.apache.flink.table.types.logical.RowType) GeneratedRecordComparator(org.apache.flink.table.runtime.generated.GeneratedRecordComparator) SortLimitOperator(org.apache.flink.table.runtime.operators.sort.SortLimitOperator)
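The guard against limitEnd == Long.MAX_VALUE makes sense given the heap-based strategy the operator uses: keeping the best limitEnd rows in a bounded heap only works if the bound is finite. A self-contained sketch of that idea (not the Flink operator itself, which works on RowData with a generated comparator):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class SortLimitSketch {
    // Keep at most limitEnd rows: a max-heap lets us evict the current
    // worst element whenever the heap grows past the bound.
    static List<Integer> topN(List<Integer> input, int limitEnd) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(Comparator.reverseOrder());
        for (int v : input) {
            heap.add(v);
            if (heap.size() > limitEnd) {
                heap.poll(); // drop the largest; it cannot be in the top N
            }
        }
        List<Integer> out = new ArrayList<>(heap);
        out.sort(null);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(topN(List.of(5, 1, 4, 2, 3), 3)); // [1, 2, 3]
    }
}
```

With an unbounded limitEnd the heap would have to buffer the entire input, which is exactly the case the TableException rules out up front.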

Example 29 with TableException

use of org.apache.flink.table.api.TableException in project flink by apache.

the class SetOperationParseStrategy method convert.

@Override
public Operation convert(String statement) {
    Matcher matcher = pattern.matcher(statement.trim());
    final List<String> operands = new ArrayList<>();
    if (matcher.find()) {
        if (matcher.group("key") != null) {
            operands.add(matcher.group("key"));
            operands.add(matcher.group("quotedVal") != null ? matcher.group("quotedVal") : matcher.group("val"));
        }
    }
    // only capture SET
    if (operands.isEmpty()) {
        return new SetOperation();
    } else if (operands.size() == 2) {
        return new SetOperation(operands.get(0), operands.get(1));
    } else {
        // impossible
        throw new TableException(String.format("Failed to convert the statement to SET operation: %s.", statement));
    }
}
Also used : SetOperation(org.apache.flink.table.operations.command.SetOperation) TableException(org.apache.flink.table.api.TableException) Matcher(java.util.regex.Matcher) ArrayList(java.util.ArrayList)
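The convert method relies on named capture groups ("key", "quotedVal", "val") in a precompiled pattern, so operands is either empty (bare SET) or a key/value pair, and the final TableException branch is genuinely unreachable for any statement the pattern matched. A hypothetical mini version of that strategy (the real Flink pattern is more permissive; this one only handles `SET` and `SET key = value` with an optional single-quoted value):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SetParseSketch {
    // Assumed simplified pattern, not Flink's actual regex.
    static final Pattern PATTERN = Pattern.compile(
            "SET(\\s+(?<key>\\S+)\\s*=\\s*('(?<quotedVal>[^']*)'|(?<val>\\S+)))?",
            Pattern.CASE_INSENSITIVE);

    // Same shape as SetOperationParseStrategy#convert: collect the key and
    // the (quoted or bare) value when present, else leave operands empty.
    static List<String> operands(String statement) {
        Matcher matcher = PATTERN.matcher(statement.trim());
        List<String> operands = new ArrayList<>();
        if (matcher.find() && matcher.group("key") != null) {
            operands.add(matcher.group("key"));
            operands.add(matcher.group("quotedVal") != null
                    ? matcher.group("quotedVal")
                    : matcher.group("val"));
        }
        return operands;
    }

    public static void main(String[] args) {
        System.out.println(operands("SET"));                       // []
        System.out.println(operands("SET table.dml-sync = true")); // [table.dml-sync, true]
    }
}
```

The "impossible" else branch in the original is still worth keeping: if the pattern is ever edited so that only one group matches, the exception surfaces the inconsistency instead of silently producing a malformed SetOperation.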

Example 30 with TableException

use of org.apache.flink.table.api.TableException in project flink by apache.

the class WritingMetadataSpec method apply.

@Override
public void apply(DynamicTableSink tableSink) {
    if (tableSink instanceof SupportsWritingMetadata) {
        DataType consumedDataType = TypeConversions.fromLogicalToDataType(consumedType);
        ((SupportsWritingMetadata) tableSink).applyWritableMetadata(metadataKeys, consumedDataType);
    } else {
        throw new TableException(String.format("%s does not support SupportsWritingMetadata.", tableSink.getClass().getName()));
    }
}
Also used : TableException(org.apache.flink.table.api.TableException) SupportsWritingMetadata(org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata) DataType(org.apache.flink.table.types.DataType)
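This example is the standard ability-interface pattern in the Flink table connector API: check instanceof before casting to the optional capability, and raise a descriptive error naming the offending class otherwise. A stand-in sketch of the pattern (the interface and sink names below are hypothetical, not the real connector types):

```java
import java.util.List;

public class AbilityCheckSketch {
    interface Sink {}
    // Stand-in for an optional capability like SupportsWritingMetadata.
    interface SupportsWritingMetadata {
        void applyWritableMetadata(List<String> metadataKeys);
    }

    static class MetadataSink implements Sink, SupportsWritingMetadata {
        List<String> applied;
        public void applyWritableMetadata(List<String> metadataKeys) {
            this.applied = metadataKeys;
        }
    }

    static class PlainSink implements Sink {}

    // Same guard-then-cast shape as WritingMetadataSpec#apply.
    static void apply(Sink sink, List<String> metadataKeys) {
        if (sink instanceof SupportsWritingMetadata) {
            ((SupportsWritingMetadata) sink).applyWritableMetadata(metadataKeys);
        } else {
            throw new RuntimeException(String.format(
                    "%s does not support SupportsWritingMetadata.",
                    sink.getClass().getName()));
        }
    }

    public static void main(String[] args) {
        MetadataSink ok = new MetadataSink();
        apply(ok, List.of("timestamp"));
        System.out.println(ok.applied); // [timestamp]
        try {
            apply(new PlainSink(), List.of("timestamp"));
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Including the concrete class name in the message matters in practice: the sink is chosen by factory lookup at planning time, so the user needs to know which implementation lacked the capability.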

Aggregations

TableException (org.apache.flink.table.api.TableException) 163
RowData (org.apache.flink.table.data.RowData) 35
RowType (org.apache.flink.table.types.logical.RowType) 35
Transformation (org.apache.flink.api.dag.Transformation) 28
ArrayList (java.util.ArrayList) 27
ExecEdge (org.apache.flink.table.planner.plan.nodes.exec.ExecEdge) 24
LogicalType (org.apache.flink.table.types.logical.LogicalType) 24
List (java.util.List) 22
DataType (org.apache.flink.table.types.DataType) 19
OneInputTransformation (org.apache.flink.streaming.api.transformations.OneInputTransformation) 18
ValidationException (org.apache.flink.table.api.ValidationException) 17
IOException (java.io.IOException) 13
AggregateCall (org.apache.calcite.rel.core.AggregateCall) 13
ValueLiteralExpression (org.apache.flink.table.expressions.ValueLiteralExpression) 13
RowDataKeySelector (org.apache.flink.table.runtime.keyselector.RowDataKeySelector) 13
Optional (java.util.Optional) 11
Configuration (org.apache.flink.configuration.Configuration) 11
StreamExecutionEnvironment (org.apache.flink.streaming.api.environment.StreamExecutionEnvironment) 11
Constructor (java.lang.reflect.Constructor) 10
Arrays (java.util.Arrays) 9