
Example 1 with SortSpec

use of org.apache.flink.table.planner.plan.nodes.exec.spec.SortSpec in project flink by apache.

From the class StreamExecMatch, the method checkOrderKeys:

private void checkOrderKeys(RowType inputRowType) {
    SortSpec orderKeys = matchSpec.getOrderKeys();
    if (orderKeys.getFieldSize() == 0) {
        throw new TableException("You must specify either rowtime or proctime for order by.");
    }
    SortSpec.SortFieldSpec timeOrderField = orderKeys.getFieldSpec(0);
    int timeOrderFieldIdx = timeOrderField.getFieldIndex();
    LogicalType timeOrderFieldType = inputRowType.getTypeAt(timeOrderFieldIdx);
    // the time attribute must be the first sort field; distinguish it from the other order fields
    if (!TypeCheckUtils.isRowTime(timeOrderFieldType) && !TypeCheckUtils.isProcTime(timeOrderFieldType)) {
        throw new TableException("You must specify either rowtime or proctime for order by as the first one.");
    }
    // time ordering needs to be ascending
    if (!orderKeys.getAscendingOrders()[0]) {
        throw new TableException("Primary sort order of a streaming table must be ascending on time.");
    }
}
Also used : TableException(org.apache.flink.table.api.TableException) LogicalType(org.apache.flink.table.types.logical.LogicalType) SortSpec(org.apache.flink.table.planner.plan.nodes.exec.spec.SortSpec)
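The check above boils down to three rules: the order keys must be non-empty, the first key must be a time attribute (rowtime or proctime), and that key must be ascending. A minimal stand-alone sketch of those rules, using a hypothetical `OrderKeyCheck` helper rather than Flink's `SortSpec`:

```java
import java.util.List;

public class OrderKeyCheck {
    // Stand-in for SortSpec.SortFieldSpec: column index, direction, and whether
    // the column carries a time attribute.
    public record SortField(int fieldIndex, boolean ascending, boolean isTimeAttribute) {}

    // Mirrors checkOrderKeys: non-empty, time attribute first, time order ascending.
    public static void check(List<SortField> orderKeys) {
        if (orderKeys.isEmpty()) {
            throw new IllegalArgumentException(
                    "You must specify either rowtime or proctime for order by.");
        }
        SortField first = orderKeys.get(0);
        if (!first.isTimeAttribute()) {
            throw new IllegalArgumentException(
                    "The first order-by field must be a time attribute.");
        }
        if (!first.ascending()) {
            throw new IllegalArgumentException(
                    "Primary sort order of a streaming table must be ascending on time.");
        }
    }
}
```

The same structure appears in the real method: index 0 is special-cased because the streaming runtime orders by the time attribute via timers, and secondary keys only break ties.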

Example 2 with SortSpec

use of org.apache.flink.table.planner.plan.nodes.exec.spec.SortSpec in project flink by apache.

From the class StreamExecTemporalSort, the method createSortProcTime:

/**
 * Create Sort logic based on processing time.
 */
private Transformation<RowData> createSortProcTime(
        RowType inputType, Transformation<RowData> inputTransform, ExecNodeConfig config) {
    // if the order has secondary sorting fields in addition to the proctime
    if (sortSpec.getFieldSize() > 1) {
        // skip the first field which is the proctime field and would be ordered by timer.
        SortSpec specExcludeTime = sortSpec.createSubSortSpec(1);
        GeneratedRecordComparator rowComparator =
                ComparatorCodeGenerator.gen(
                        config.getTableConfig(), "ProcTimeSortComparator", inputType, specExcludeTime);
        ProcTimeSortOperator sortOperator =
                new ProcTimeSortOperator(InternalTypeInfo.of(inputType), rowComparator);
        OneInputTransformation<RowData, RowData> transform =
                ExecNodeUtil.createOneInputTransformation(
                        inputTransform,
                        createTransformationMeta(TEMPORAL_SORT_TRANSFORMATION, config),
                        sortOperator,
                        InternalTypeInfo.of(inputType),
                        inputTransform.getParallelism());
        // if the input is a singleton exchange, the parallelism must stay 1.
        if (inputsContainSingleton()) {
            transform.setParallelism(1);
            transform.setMaxParallelism(1);
        }
        EmptyRowDataKeySelector selector = EmptyRowDataKeySelector.INSTANCE;
        transform.setStateKeySelector(selector);
        transform.setStateKeyType(selector.getProducedType());
        return transform;
    } else {
        // if the order is done only on proctime we only need to forward the elements
        return inputTransform;
    }
}
Also used : RowData(org.apache.flink.table.data.RowData) ProcTimeSortOperator(org.apache.flink.table.runtime.operators.sort.ProcTimeSortOperator) GeneratedRecordComparator(org.apache.flink.table.runtime.generated.GeneratedRecordComparator) EmptyRowDataKeySelector(org.apache.flink.table.runtime.keyselector.EmptyRowDataKeySelector) SortSpec(org.apache.flink.table.planner.plan.nodes.exec.spec.SortSpec)
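`createSubSortSpec(1)` slices off the leading proctime field so that only the secondary keys feed the generated comparator. A rough illustration of that slicing and of what the generated comparator does, using plain Java stand-ins (the `Field` record and `ProcTimeSortSketch` class are hypothetical, not Flink API):

```java
import java.util.Comparator;
import java.util.List;

public class ProcTimeSortSketch {
    // Stand-in for SortSpec.SortFieldSpec: a column index plus sort direction.
    public record Field(int index, boolean ascending) {}

    // Mirrors sortSpec.createSubSortSpec(1): drop the leading time field and
    // keep only the secondary sort keys.
    public static List<Field> excludeTimeField(List<Field> sortFields) {
        return sortFields.subList(1, sortFields.size());
    }

    // Rough analogue of the generated record comparator: chain field-by-field
    // comparisons over the remaining keys, honoring each field's direction.
    public static Comparator<Object[]> comparator(List<Field> fields) {
        return (a, b) -> {
            for (Field f : fields) {
                @SuppressWarnings("unchecked")
                Comparable<Object> left = (Comparable<Object>) a[f.index()];
                int c = left.compareTo(b[f.index()]);
                if (c != 0) {
                    return f.ascending() ? c : -c;
                }
            }
            return 0;
        };
    }
}
```

In the real operator the proctime column itself never reaches the comparator: rows are buffered per processing-time timer, and the comparator only orders rows that fired in the same batch.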

Example 3 with SortSpec

use of org.apache.flink.table.planner.plan.nodes.exec.spec.SortSpec in project flink by apache.

From the class StreamExecTemporalSort, the method createSortRowTime:

/**
 * Create Sort logic based on row time.
 */
private Transformation<RowData> createSortRowTime(
        RowType inputType, Transformation<RowData> inputTransform, ExecNodeConfig config) {
    GeneratedRecordComparator rowComparator = null;
    if (sortSpec.getFieldSize() > 1) {
        // skip the first field which is the rowtime field and would be ordered by timer.
        SortSpec specExcludeTime = sortSpec.createSubSortSpec(1);
        rowComparator =
                ComparatorCodeGenerator.gen(
                        config.getTableConfig(), "RowTimeSortComparator", inputType, specExcludeTime);
    }
    RowTimeSortOperator sortOperator =
            new RowTimeSortOperator(
                    InternalTypeInfo.of(inputType),
                    sortSpec.getFieldSpec(0).getFieldIndex(),
                    rowComparator);
    OneInputTransformation<RowData, RowData> transform =
            ExecNodeUtil.createOneInputTransformation(
                    inputTransform,
                    createTransformationMeta(TEMPORAL_SORT_TRANSFORMATION, config),
                    sortOperator,
                    InternalTypeInfo.of(inputType),
                    inputTransform.getParallelism());
    if (inputsContainSingleton()) {
        transform.setParallelism(1);
        transform.setMaxParallelism(1);
    }
    EmptyRowDataKeySelector selector = EmptyRowDataKeySelector.INSTANCE;
    transform.setStateKeySelector(selector);
    transform.setStateKeyType(selector.getProducedType());
    return transform;
}
Also used : RowData(org.apache.flink.table.data.RowData) RowTimeSortOperator(org.apache.flink.table.runtime.operators.sort.RowTimeSortOperator) GeneratedRecordComparator(org.apache.flink.table.runtime.generated.GeneratedRecordComparator) EmptyRowDataKeySelector(org.apache.flink.table.runtime.keyselector.EmptyRowDataKeySelector) SortSpec(org.apache.flink.table.planner.plan.nodes.exec.spec.SortSpec)
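Examples 2 and 3 differ in one important way: a proctime-only sort degenerates to forwarding the input, while a rowtime sort always needs the operator because rows must be buffered until the watermark passes their timestamp. That decision can be condensed into a small table of cases (the `TemporalSortPlanSketch` class and its enum are illustrative names, not Flink API):

```java
public class TemporalSortPlanSketch {
    public enum Plan { FORWARD_INPUT, TIMER_ONLY_SORT, TIMER_PLUS_COMPARATOR }

    // Condenses createSortProcTime/createSortRowTime: with secondary keys a
    // generated comparator is needed; with only the time field, proctime can
    // forward rows unchanged, while rowtime still buffers until the watermark.
    public static Plan plan(boolean isRowTime, int sortFieldCount) {
        if (sortFieldCount > 1) {
            return Plan.TIMER_PLUS_COMPARATOR;
        }
        return isRowTime ? Plan.TIMER_ONLY_SORT : Plan.FORWARD_INPUT;
    }
}
```

This is why `createSortRowTime` tolerates a `null` comparator: the operator is always built, and the comparator is an optional tie-breaker.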

Example 4 with SortSpec

use of org.apache.flink.table.planner.plan.nodes.exec.spec.SortSpec in project flink by apache.

From the class BatchExecPythonOverAggregate, the method getPythonOverWindowAggregateFunctionOperator:

@SuppressWarnings("unchecked")
private OneInputStreamOperator<RowData, RowData> getPythonOverWindowAggregateFunctionOperator(
        ExecNodeConfig config,
        Configuration pythonConfig,
        RowType inputRowType,
        RowType outputRowType,
        boolean[] isRangeWindows,
        int[] udafInputOffsets,
        PythonFunctionInfo[] pythonFunctionInfos) {
    Class<?> clazz = CommonPythonUtil.loadClass(ARROW_PYTHON_OVER_WINDOW_AGGREGATE_FUNCTION_OPERATOR_NAME);
    RowType udfInputType = (RowType) Projection.of(udafInputOffsets).project(inputRowType);
    RowType udfOutputType =
            (RowType)
                    Projection.range(inputRowType.getFieldCount(), outputRowType.getFieldCount())
                            .project(outputRowType);
    PartitionSpec partitionSpec = overSpec.getPartition();
    List<OverSpec.GroupSpec> groups = overSpec.getGroups();
    SortSpec sortSpec = groups.get(groups.size() - 1).getSort();
    try {
        Constructor<?> ctor =
                clazz.getConstructor(
                        Configuration.class,
                        PythonFunctionInfo[].class,
                        RowType.class,
                        RowType.class,
                        RowType.class,
                        long[].class,
                        long[].class,
                        boolean[].class,
                        int[].class,
                        int.class,
                        boolean.class,
                        GeneratedProjection.class,
                        GeneratedProjection.class,
                        GeneratedProjection.class);
        return (OneInputStreamOperator<RowData, RowData>)
                ctor.newInstance(
                        pythonConfig,
                        pythonFunctionInfos,
                        inputRowType,
                        udfInputType,
                        udfOutputType,
                        lowerBoundary.stream().mapToLong(i -> i).toArray(),
                        upperBoundary.stream().mapToLong(i -> i).toArray(),
                        isRangeWindows,
                        aggWindowIndex.stream().mapToInt(i -> i).toArray(),
                        sortSpec.getFieldIndices()[0],
                        sortSpec.getAscendingOrders()[0],
                        ProjectionCodeGenerator.generateProjection(
                                CodeGeneratorContext.apply(config.getTableConfig()),
                                "UdafInputProjection",
                                inputRowType,
                                udfInputType,
                                udafInputOffsets),
                        ProjectionCodeGenerator.generateProjection(
                                CodeGeneratorContext.apply(config.getTableConfig()),
                                "GroupKey",
                                inputRowType,
                                (RowType) Projection.of(partitionSpec.getFieldIndices()).project(inputRowType),
                                partitionSpec.getFieldIndices()),
                        ProjectionCodeGenerator.generateProjection(
                                CodeGeneratorContext.apply(config.getTableConfig()),
                                "GroupSet",
                                inputRowType,
                                (RowType) Projection.of(partitionSpec.getFieldIndices()).project(inputRowType),
                                partitionSpec.getFieldIndices()));
    } catch (NoSuchMethodException | InstantiationException | IllegalAccessException | InvocationTargetException e) {
        throw new TableException(
                "Python BatchArrowPythonOverWindowAggregateFunctionOperator could not be constructed.", e);
    }
}
Also used : OverAggregateUtil(org.apache.flink.table.planner.plan.utils.OverAggregateUtil) InputProperty(org.apache.flink.table.planner.plan.nodes.exec.InputProperty) Tuple2(org.apache.flink.api.java.tuple.Tuple2) RowType(org.apache.flink.table.types.logical.RowType) Constructor(java.lang.reflect.Constructor) ExecNode(org.apache.flink.table.planner.plan.nodes.exec.ExecNode) ArrayList(java.util.ArrayList) ExecNodeUtil(org.apache.flink.table.planner.plan.nodes.exec.utils.ExecNodeUtil) ManagedMemoryUseCase(org.apache.flink.core.memory.ManagedMemoryUseCase) PartitionSpec(org.apache.flink.table.planner.plan.nodes.exec.spec.PartitionSpec) CodeGeneratorContext(org.apache.flink.table.planner.codegen.CodeGeneratorContext) Projection(org.apache.flink.table.connector.Projection) ProjectionCodeGenerator(org.apache.flink.table.planner.codegen.ProjectionCodeGenerator) ExecNodeContext(org.apache.flink.table.planner.plan.nodes.exec.ExecNodeContext) RowData(org.apache.flink.table.data.RowData) PlannerBase(org.apache.flink.table.planner.delegation.PlannerBase) CommonPythonUtil(org.apache.flink.table.planner.plan.nodes.exec.utils.CommonPythonUtil) ExecNodeConfig(org.apache.flink.table.planner.plan.nodes.exec.ExecNodeConfig) Configuration(org.apache.flink.configuration.Configuration) TableException(org.apache.flink.table.api.TableException) PythonFunctionInfo(org.apache.flink.table.functions.python.PythonFunctionInfo) OverSpec(org.apache.flink.table.planner.plan.nodes.exec.spec.OverSpec) OneInputTransformation(org.apache.flink.streaming.api.transformations.OneInputTransformation) InvocationTargetException(java.lang.reflect.InvocationTargetException) List(java.util.List) InternalTypeInfo(org.apache.flink.table.runtime.typeutils.InternalTypeInfo) ExecEdge(org.apache.flink.table.planner.plan.nodes.exec.ExecEdge) AggregateCall(org.apache.calcite.rel.core.AggregateCall) GeneratedProjection(org.apache.flink.table.runtime.generated.GeneratedProjection) Transformation(org.apache.flink.api.dag.Transformation) OneInputStreamOperator(org.apache.flink.streaming.api.operators.OneInputStreamOperator) SortSpec(org.apache.flink.table.planner.plan.nodes.exec.spec.SortSpec)
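The operator class is loaded and instantiated reflectively, so the planner can compile without the Python module on the classpath; the catch block wraps all four checked reflection exceptions into one planner exception. The same pattern in miniature (the `ReflectiveFactorySketch` helper name is made up for this sketch):

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;

public class ReflectiveFactorySketch {
    // Load a class by name and invoke the constructor matching paramTypes,
    // wrapping the checked reflection exceptions as the snippet above does.
    public static Object create(String className, Class<?>[] paramTypes, Object... args) {
        try {
            Class<?> clazz = Class.forName(className);
            Constructor<?> ctor = clazz.getConstructor(paramTypes);
            return ctor.newInstance(args);
        } catch (ClassNotFoundException
                | NoSuchMethodException
                | InstantiationException
                | IllegalAccessException
                | InvocationTargetException e) {
            throw new IllegalStateException(className + " could not be constructed.", e);
        }
    }
}
```

Note that `getConstructor` matches parameter types exactly, which is why the Flink code spells out every primitive and array class (`long[].class`, `int.class`, and so on) rather than relying on autoboxing.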

Example 5 with SortSpec

use of org.apache.flink.table.planner.plan.nodes.exec.spec.SortSpec in project flink by apache.

From the class SortCodeGeneratorTest, the method testOneKey:

@Test
public void testOneKey() throws Exception {
    for (int time = 0; time < 100; time++) {
        Random rnd = new Random();
        LogicalType[] fields = new LogicalType[rnd.nextInt(9) + 1];
        for (int i = 0; i < fields.length; i++) {
            fields[i] = types[rnd.nextInt(types.length)];
        }
        inputType = RowType.of(fields);
        SortSpec.SortSpecBuilder builder = SortSpec.builder();
        boolean order = rnd.nextBoolean();
        builder.addField(0, order, SortUtil.getNullDefaultOrder(order));
        sortSpec = builder.build();
        testInner();
    }
}
Also used : Random(java.util.Random) ThreadLocalRandom(java.util.concurrent.ThreadLocalRandom) LogicalType(org.apache.flink.table.types.logical.LogicalType) SortSpec(org.apache.flink.table.planner.plan.nodes.exec.spec.SortSpec) Test(org.junit.Test)
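The test builds its spec through `SortSpec.builder()`, where each field carries an index, a direction, and a null ordering supplied by `SortUtil.getNullDefaultOrder(order)`. A simplified stand-in for that builder, with an illustrative null-order default (the `SortSpecBuilderSketch` class is hypothetical, and its null-order rule is only a placeholder for whatever `SortUtil` actually returns):

```java
import java.util.ArrayList;
import java.util.List;

public class SortSpecBuilderSketch {
    // Stand-in for one SortSpec field: index, direction, and null placement.
    public record Field(int index, boolean ascending, boolean nullsIsLast) {}

    private final List<Field> fields = new ArrayList<>();

    // Placeholder for SortUtil.getNullDefaultOrder: here, NULLs go last on
    // ascending keys and first on descending ones. The real default may differ.
    public static boolean nullDefaultOrder(boolean ascending) {
        return ascending;
    }

    // Mirrors builder.addField(index, order, nullOrder), returning this for chaining.
    public SortSpecBuilderSketch addField(int index, boolean ascending) {
        fields.add(new Field(index, ascending, nullDefaultOrder(ascending)));
        return this;
    }

    // Mirrors builder.build(): freeze the accumulated fields.
    public List<Field> build() {
        return List.copyOf(fields);
    }
}
```

The fluent shape matters for the test above: it re-runs 100 times with random field types and directions, rebuilding the spec from scratch each iteration.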

Aggregations

SortSpec (org.apache.flink.table.planner.plan.nodes.exec.spec.SortSpec) 11
RowData (org.apache.flink.table.data.RowData) 6
GeneratedRecordComparator (org.apache.flink.table.runtime.generated.GeneratedRecordComparator) 5
LogicalType (org.apache.flink.table.types.logical.LogicalType) 5
RowType (org.apache.flink.table.types.logical.RowType) 5
Transformation (org.apache.flink.api.dag.Transformation) 4
TableException (org.apache.flink.table.api.TableException) 4
ExecEdge (org.apache.flink.table.planner.plan.nodes.exec.ExecEdge) 4
Random (java.util.Random) 3
OneInputTransformation (org.apache.flink.streaming.api.transformations.OneInputTransformation) 3
Test (org.junit.Test) 3
ArrayList (java.util.ArrayList) 2
List (java.util.List) 2
ThreadLocalRandom (java.util.concurrent.ThreadLocalRandom) 2
BinaryRowData (org.apache.flink.table.data.binary.BinaryRowData) 2
CodeGeneratorContext (org.apache.flink.table.planner.codegen.CodeGeneratorContext) 2
EmptyRowDataKeySelector (org.apache.flink.table.runtime.keyselector.EmptyRowDataKeySelector) 2
RowDataKeySelector (org.apache.flink.table.runtime.keyselector.RowDataKeySelector) 2
IOException (java.io.IOException) 1
OutputStream (java.io.OutputStream) 1