Example 46 with Transformation

Use of org.apache.flink.api.dag.Transformation in project flink by apache.

The class StreamExecDataStreamScan, method translateToPlanInternal.

@SuppressWarnings("unchecked")
@Override
protected Transformation<RowData> translateToPlanInternal(PlannerBase planner, ExecNodeConfig config) {
    final Transformation<?> sourceTransform = dataStream.getTransformation();
    final Optional<RexNode> rowtimeExpr = getRowtimeExpression(planner.getRelBuilder());
    final Transformation<RowData> transformation;
    // internal conversion is needed when there is a rowtime extraction expression,
    // or when the physical type of the input data stream is not RowData
    if (rowtimeExpr.isPresent() || ScanUtil.needsConversion(sourceType)) {
        final String extractElement, resetElement;
        if (ScanUtil.hasTimeAttributeField(fieldIndexes)) {
            String elementTerm = OperatorCodeGenerator.ELEMENT();
            extractElement = String.format("ctx.%s = %s;", elementTerm, elementTerm);
            resetElement = String.format("ctx.%s = null;", elementTerm);
        } else {
            extractElement = "";
            resetElement = "";
        }
        final CodeGeneratorContext ctx =
                new CodeGeneratorContext(config.getTableConfig())
                        .setOperatorBaseClass(TableStreamOperator.class);
        transformation =
                ScanUtil.convertToInternalRow(
                        ctx,
                        (Transformation<Object>) sourceTransform,
                        fieldIndexes,
                        sourceType,
                        (RowType) getOutputType(),
                        qualifiedName,
                        (detailName, simplifyName) ->
                                createFormattedTransformationName(detailName, simplifyName, config),
                        (description) -> createFormattedTransformationDescription(description, config),
                        JavaScalaConversionUtil.toScala(rowtimeExpr),
                        extractElement,
                        resetElement);
    } else {
        transformation = (Transformation<RowData>) sourceTransform;
    }
    return transformation;
}
Also used : TableStreamOperator(org.apache.flink.table.runtime.operators.TableStreamOperator) DataType(org.apache.flink.table.types.DataType) Arrays(java.util.Arrays) MultipleTransformationTranslator(org.apache.flink.table.planner.plan.nodes.exec.MultipleTransformationTranslator) RowType(org.apache.flink.table.types.logical.RowType) FlinkRelBuilder(org.apache.flink.table.planner.calcite.FlinkRelBuilder) ExecNode(org.apache.flink.table.planner.plan.nodes.exec.ExecNode) TimestampType(org.apache.flink.table.types.logical.TimestampType) TimestampKind(org.apache.flink.table.types.logical.TimestampKind) ScanUtil(org.apache.flink.table.planner.plan.utils.ScanUtil) RexNode(org.apache.calcite.rex.RexNode) ROWTIME_STREAM_MARKER(org.apache.flink.table.typeutils.TimeIndicatorTypeInfo.ROWTIME_STREAM_MARKER) CodeGeneratorContext(org.apache.flink.table.planner.codegen.CodeGeneratorContext) TypeCheckUtils(org.apache.flink.table.runtime.typeutils.TypeCheckUtils) ExecNodeContext(org.apache.flink.table.planner.plan.nodes.exec.ExecNodeContext) RowData(org.apache.flink.table.data.RowData) PlannerBase(org.apache.flink.table.planner.delegation.PlannerBase) ExecNodeConfig(org.apache.flink.table.planner.plan.nodes.exec.ExecNodeConfig) Collectors(java.util.stream.Collectors) DataStream(org.apache.flink.streaming.api.datastream.DataStream) OperatorCodeGenerator(org.apache.flink.table.planner.codegen.OperatorCodeGenerator) List(java.util.List) LogicalTypeDataTypeConverter.fromDataTypeToLogicalType(org.apache.flink.table.runtime.types.LogicalTypeDataTypeConverter.fromDataTypeToLogicalType) LogicalType(org.apache.flink.table.types.logical.LogicalType) FlinkSqlOperatorTable(org.apache.flink.table.planner.functions.sql.FlinkSqlOperatorTable) JavaScalaConversionUtil(org.apache.flink.table.planner.utils.JavaScalaConversionUtil) Optional(java.util.Optional) ExecNodeBase(org.apache.flink.table.planner.plan.nodes.exec.ExecNodeBase) Transformation(org.apache.flink.api.dag.Transformation) Collections(java.util.Collections)
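
For context, here is a minimal sketch of how a DataStream-backed table reaches this translation path. It assumes Flink's Table API bridge (StreamTableEnvironment), which is not part of the snippet above; since the stream's physical type below is String rather than RowData, ScanUtil.needsConversion(sourceType) holds and the planner takes the conversion branch shown above.

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class DataStreamScanSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

        // The physical type here is String, not RowData, so scanning the
        // stream as a table goes through the internal-row conversion above.
        DataStream<String> words = env.fromElements("hello", "world");
        Table table = tEnv.fromDataStream(words);
        table.execute().print();
    }
}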

Example 47 with Transformation

Use of org.apache.flink.api.dag.Transformation in project flink by apache.

The class KafkaDynamicTableFactoryTest, method assertKafkaSource.

private KafkaSource<?> assertKafkaSource(ScanTableSource.ScanRuntimeProvider provider) {
    assertThat(provider).isInstanceOf(DataStreamScanProvider.class);
    final DataStreamScanProvider dataStreamScanProvider = (DataStreamScanProvider) provider;
    final Transformation<RowData> transformation =
            dataStreamScanProvider
                    .produceDataStream(
                            n -> Optional.empty(),
                            StreamExecutionEnvironment.createLocalEnvironment())
                    .getTransformation();
    assertThat(transformation).isInstanceOf(SourceTransformation.class);
    SourceTransformation<RowData, KafkaPartitionSplit, KafkaSourceEnumState> sourceTransformation =
            (SourceTransformation<RowData, KafkaPartitionSplit, KafkaSourceEnumState>) transformation;
    assertThat(sourceTransformation.getSource()).isInstanceOf(KafkaSource.class);
    return (KafkaSource<?>) sourceTransformation.getSource();
}
Also used : DataType(org.apache.flink.table.types.DataType) ConfigOptions(org.apache.flink.configuration.ConfigOptions) Arrays(java.util.Arrays) Assertions.assertThat(org.assertj.core.api.Assertions.assertThat) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) SourceTransformation(org.apache.flink.streaming.api.transformations.SourceTransformation) DataStreamScanProvider(org.apache.flink.table.connector.source.DataStreamScanProvider) DecodingFormat(org.apache.flink.table.connector.format.DecodingFormat) ExtendWith(org.junit.jupiter.api.extension.ExtendWith) Map(java.util.Map) FactoryMocks.createTableSink(org.apache.flink.table.factories.utils.FactoryMocks.createTableSink) FlinkFixedPartitioner(org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner) ConfluentRegistryAvroSerializationSchema(org.apache.flink.formats.avro.registry.confluent.ConfluentRegistryAvroSerializationSchema) DynamicTableSource(org.apache.flink.table.connector.source.DynamicTableSource) DynamicTableSink(org.apache.flink.table.connector.sink.DynamicTableSink) KafkaTopicPartition(org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition) Set(java.util.Set) EncodingFormatMock(org.apache.flink.table.factories.TestFormatFactory.EncodingFormatMock) ConsumerConfig(org.apache.kafka.clients.consumer.ConsumerConfig) AVRO_CONFLUENT(org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptionsUtil.AVRO_CONFLUENT) ResolvedExpressionMock(org.apache.flink.table.expressions.utils.ResolvedExpressionMock) AvroRowDataSerializationSchema(org.apache.flink.formats.avro.AvroRowDataSerializationSchema) Test(org.junit.jupiter.api.Test) List(java.util.List) FactoryUtil(org.apache.flink.table.factories.FactoryUtil) ValidationException(org.apache.flink.table.api.ValidationException) FlinkAssertions.containsCause(org.apache.flink.core.testutils.FlinkAssertions.containsCause) Optional(java.util.Optional) Pattern(java.util.regex.Pattern) ScanRuntimeProviderContext(org.apache.flink.table.runtime.connector.source.ScanRuntimeProviderContext) SerializationSchema(org.apache.flink.api.common.serialization.SerializationSchema) StreamExecutionEnvironment(org.apache.flink.streaming.api.environment.StreamExecutionEnvironment) TestFormatFactory(org.apache.flink.table.factories.TestFormatFactory) DeliveryGuarantee(org.apache.flink.connector.base.DeliveryGuarantee) EncodingFormat(org.apache.flink.table.connector.format.EncodingFormat) Sink(org.apache.flink.api.connector.sink2.Sink) ChangelogMode(org.apache.flink.table.connector.ChangelogMode) Column(org.apache.flink.table.catalog.Column) HashMap(java.util.HashMap) RowType(org.apache.flink.table.types.logical.RowType) ScanTableSource(org.apache.flink.table.connector.source.ScanTableSource) SinkV2Provider(org.apache.flink.table.connector.sink.SinkV2Provider) HashSet(java.util.HashSet) TestLoggerExtension(org.apache.flink.util.TestLoggerExtension) PROPERTIES_PREFIX(org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptionsUtil.PROPERTIES_PREFIX) KafkaSink(org.apache.flink.connector.kafka.sink.KafkaSink) Assertions.assertThatThrownBy(org.assertj.core.api.Assertions.assertThatThrownBy) Assertions.assertThatExceptionOfType(org.assertj.core.api.Assertions.assertThatExceptionOfType) RowDataToAvroConverters(org.apache.flink.formats.avro.RowDataToAvroConverters) KafkaSourceOptions(org.apache.flink.connector.kafka.source.KafkaSourceOptions) FactoryMocks.createTableSource(org.apache.flink.table.factories.utils.FactoryMocks.createTableSource) 
Nullable(javax.annotation.Nullable) ValueSource(org.junit.jupiter.params.provider.ValueSource) DEBEZIUM_AVRO_CONFLUENT(org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptionsUtil.DEBEZIUM_AVRO_CONFLUENT) RowData(org.apache.flink.table.data.RowData) Properties(java.util.Properties) WatermarkSpec(org.apache.flink.table.catalog.WatermarkSpec) Configuration(org.apache.flink.configuration.Configuration) DataTypes(org.apache.flink.table.api.DataTypes) ScanStartupMode(org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode) KafkaSourceEnumState(org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumState) FlinkKafkaPartitioner(org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner) DeserializationSchema(org.apache.flink.api.common.serialization.DeserializationSchema) Consumer(java.util.function.Consumer) StartupMode(org.apache.flink.streaming.connectors.kafka.config.StartupMode) ParameterizedTest(org.junit.jupiter.params.ParameterizedTest) KafkaSource(org.apache.flink.connector.kafka.source.KafkaSource) UniqueConstraint(org.apache.flink.table.catalog.UniqueConstraint) DecodingFormatMock(org.apache.flink.table.factories.TestFormatFactory.DecodingFormatMock) SinkRuntimeProviderContext(org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext) ImmutableList(org.apache.flink.shaded.guava30.com.google.common.collect.ImmutableList) KafkaSourceTestUtils(org.apache.flink.connector.kafka.source.KafkaSourceTestUtils) FactoryMocks(org.apache.flink.table.factories.utils.FactoryMocks) KafkaPartitionSplit(org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit) DebeziumAvroSerializationSchema(org.apache.flink.formats.avro.registry.confluent.debezium.DebeziumAvroSerializationSchema) NullSource(org.junit.jupiter.params.provider.NullSource) Transformation(org.apache.flink.api.dag.Transformation) Collections(java.util.Collections) AvroSchemaConverter(org.apache.flink.formats.avro.typeutils.AvroSchemaConverter)
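
A hedged sketch of a call site for the helper above, following the shape of the surrounding test: createTableSource comes from FactoryMocks (imported in this test), while SCHEMA and getBasicSourceOptions() are hypothetical stand-ins for the test's fixtures.

// SCHEMA and getBasicSourceOptions() are assumed fixtures, not quoted from the test.
final DynamicTableSource tableSource = createTableSource(SCHEMA, getBasicSourceOptions());
final ScanTableSource.ScanRuntimeProvider provider =
        ((ScanTableSource) tableSource)
                .getScanRuntimeProvider(ScanRuntimeProviderContext.INSTANCE);
final KafkaSource<?> kafkaSource = assertKafkaSource(provider);
assertThat(kafkaSource).isNotNull();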

Example 48 with Transformation

Use of org.apache.flink.api.dag.Transformation in project flink by apache.

The class PythonOperatorChainingOptimizerTest, method testChainingNonKeyedOperators.

@Test
public void testChainingNonKeyedOperators() {
    PythonProcessOperator<?, ?> processOperator1 = createProcessOperator("f1", new RowTypeInfo(Types.INT(), Types.INT()), Types.STRING());
    PythonProcessOperator<?, ?> processOperator2 = createProcessOperator("f2", Types.STRING(), Types.INT());
    Transformation<?> sourceTransformation = mock(SourceTransformation.class);
    OneInputTransformation<?, ?> processTransformation1 = new OneInputTransformation(sourceTransformation, "Process1", processOperator1, processOperator1.getProducedType(), 2);
    Transformation<?> processTransformation2 = new OneInputTransformation(processTransformation1, "process2", processOperator2, processOperator2.getProducedType(), 2);
    List<Transformation<?>> transformations = new ArrayList<>();
    transformations.add(sourceTransformation);
    transformations.add(processTransformation1);
    transformations.add(processTransformation2);
    List<Transformation<?>> optimized = PythonOperatorChainingOptimizer.optimize(transformations);
    assertEquals(2, optimized.size());
    OneInputTransformation<?, ?> chainedTransformation = (OneInputTransformation<?, ?>) optimized.get(1);
    assertEquals(sourceTransformation.getOutputType(), chainedTransformation.getInputType());
    assertEquals(processOperator2.getProducedType(), chainedTransformation.getOutputType());
    OneInputStreamOperator<?, ?> chainedOperator = chainedTransformation.getOperator();
    assertTrue(chainedOperator instanceof PythonProcessOperator);
    validateChainedPythonFunctions(((PythonProcessOperator<?, ?>) chainedOperator).getPythonFunctionInfo(), "f2", "f1");
}
Also used : SourceTransformation(org.apache.flink.streaming.api.transformations.SourceTransformation) TwoInputTransformation(org.apache.flink.streaming.api.transformations.TwoInputTransformation) OneInputTransformation(org.apache.flink.streaming.api.transformations.OneInputTransformation) Transformation(org.apache.flink.api.dag.Transformation) ArrayList(java.util.ArrayList) RowTypeInfo(org.apache.flink.api.java.typeutils.RowTypeInfo) PythonProcessOperator(org.apache.flink.streaming.api.operators.python.PythonProcessOperator) Test(org.junit.Test)
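
Note the argument order in the final assertion: the chained Python function info nests head-first, so validateChainedPythonFunctions (a private helper of this test, not shown here) receives the expected names outermost-first, "f2" before "f1". Below is a hedged sketch of what such a walk might look like; the accessors getPythonFunction(), getSerializedPythonFunction(), and getInputs() are assumptions about the function-info classes, as is the idea that each test fixture serializes the function name itself.

// Hedged sketch, not the test's actual helper.
private static void validateChainedPythonFunctions(
        DataStreamPythonFunctionInfo info, String... expectedOutermostFirst) {
    for (String expected : expectedOutermostFirst) {
        // assumed: each fixture's serialized payload is the function name's bytes
        assertArrayEquals(
                expected.getBytes(),
                info.getPythonFunction().getSerializedPythonFunction());
        // descend to the upstream (inner) function of the chain
        Object[] inputs = info.getInputs();
        info = inputs.length > 0 ? (DataStreamPythonFunctionInfo) inputs[0] : null;
    }
}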

Example 49 with Transformation

Use of org.apache.flink.api.dag.Transformation in project flink by apache.

The class PythonOperatorChainingOptimizerTest, method testChainedTransformationPropertiesCorrectlySet.

@Test
public void testChainedTransformationPropertiesCorrectlySet() {
    PythonKeyedProcessOperator<?> keyedProcessOperator = createKeyedProcessOperator("f1", new RowTypeInfo(Types.INT(), Types.INT()), Types.STRING());
    PythonProcessOperator<?, ?> processOperator = createProcessOperator("f2", Types.STRING(), Types.STRING());
    Transformation<?> sourceTransformation = mock(SourceTransformation.class);
    OneInputTransformation<?, ?> keyedProcessTransformation = new OneInputTransformation(sourceTransformation, "keyedProcess", keyedProcessOperator, keyedProcessOperator.getProducedType(), 2);
    keyedProcessTransformation.setUid("uid");
    keyedProcessTransformation.setSlotSharingGroup("group");
    keyedProcessTransformation.setCoLocationGroupKey("col");
    keyedProcessTransformation.setMaxParallelism(64);
    keyedProcessTransformation.declareManagedMemoryUseCaseAtOperatorScope(ManagedMemoryUseCase.OPERATOR, 5);
    keyedProcessTransformation.declareManagedMemoryUseCaseAtSlotScope(ManagedMemoryUseCase.PYTHON);
    keyedProcessTransformation.declareManagedMemoryUseCaseAtSlotScope(ManagedMemoryUseCase.STATE_BACKEND);
    keyedProcessTransformation.setBufferTimeout(1000L);
    keyedProcessTransformation.setChainingStrategy(ChainingStrategy.HEAD);
    Transformation<?> processTransformation = new OneInputTransformation(keyedProcessTransformation, "process", processOperator, processOperator.getProducedType(), 2);
    processTransformation.setSlotSharingGroup("group");
    processTransformation.declareManagedMemoryUseCaseAtOperatorScope(ManagedMemoryUseCase.OPERATOR, 10);
    processTransformation.declareManagedMemoryUseCaseAtSlotScope(ManagedMemoryUseCase.PYTHON);
    processTransformation.setMaxParallelism(64);
    processTransformation.setBufferTimeout(500L);
    List<Transformation<?>> transformations = new ArrayList<>();
    transformations.add(sourceTransformation);
    transformations.add(keyedProcessTransformation);
    transformations.add(processTransformation);
    List<Transformation<?>> optimized = PythonOperatorChainingOptimizer.optimize(transformations);
    assertEquals(2, optimized.size());
    OneInputTransformation<?, ?> chainedTransformation = (OneInputTransformation<?, ?>) optimized.get(1);
    assertEquals(2, chainedTransformation.getParallelism());
    assertEquals(sourceTransformation.getOutputType(), chainedTransformation.getInputType());
    assertEquals(processOperator.getProducedType(), chainedTransformation.getOutputType());
    assertEquals(keyedProcessTransformation.getUid(), chainedTransformation.getUid());
    assertEquals("group", chainedTransformation.getSlotSharingGroup().get().getName());
    assertEquals("col", chainedTransformation.getCoLocationGroupKey());
    assertEquals(64, chainedTransformation.getMaxParallelism());
    assertEquals(500L, chainedTransformation.getBufferTimeout());
    assertEquals(15, (int) chainedTransformation.getManagedMemoryOperatorScopeUseCaseWeights().getOrDefault(ManagedMemoryUseCase.OPERATOR, 0));
    assertEquals(ChainingStrategy.HEAD, chainedTransformation.getOperatorFactory().getChainingStrategy());
    assertTrue(chainedTransformation.getManagedMemorySlotScopeUseCases().contains(ManagedMemoryUseCase.PYTHON));
    assertTrue(chainedTransformation.getManagedMemorySlotScopeUseCases().contains(ManagedMemoryUseCase.STATE_BACKEND));
    OneInputStreamOperator<?, ?> chainedOperator = chainedTransformation.getOperator();
    assertTrue(chainedOperator instanceof PythonKeyedProcessOperator);
    validateChainedPythonFunctions(((PythonKeyedProcessOperator<?>) chainedOperator).getPythonFunctionInfo(), "f2", "f1");
}
Also used : SourceTransformation(org.apache.flink.streaming.api.transformations.SourceTransformation) TwoInputTransformation(org.apache.flink.streaming.api.transformations.TwoInputTransformation) OneInputTransformation(org.apache.flink.streaming.api.transformations.OneInputTransformation) Transformation(org.apache.flink.api.dag.Transformation) PythonKeyedProcessOperator(org.apache.flink.streaming.api.operators.python.PythonKeyedProcessOperator) ArrayList(java.util.ArrayList) RowTypeInfo(org.apache.flink.api.java.typeutils.RowTypeInfo) Test(org.junit.Test)
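
Read together, the assertions spell out how the optimizer merges per-transformation properties when fusing a chain (this is a reading of the test's expectations, not quoted optimizer code): uid, co-location group, and chaining strategy come from the head transformation; the buffer timeout is the smaller of the two (min(1000, 500) = 500); operator-scope managed-memory weights are summed (5 + 10 = 15); and slot-scope use cases are unioned, which is why both PYTHON and STATE_BACKEND survive. A compact sketch of the weight merge, with plain maps standing in for the two transformations' declarations:

// Hedged sketch of the summing behavior behind assertEquals(15, ...) above.
// The maps are hypothetical stand-ins; ManagedMemoryUseCase is Flink's real enum.
Map<ManagedMemoryUseCase, Integer> headWeights = Map.of(ManagedMemoryUseCase.OPERATOR, 5);
Map<ManagedMemoryUseCase, Integer> tailWeights = Map.of(ManagedMemoryUseCase.OPERATOR, 10);
Map<ManagedMemoryUseCase, Integer> merged = new HashMap<>(headWeights);
tailWeights.forEach((useCase, weight) -> merged.merge(useCase, weight, Integer::sum));
// merged.get(ManagedMemoryUseCase.OPERATOR) == 15, matching the assertion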

Example 50 with Transformation

Use of org.apache.flink.api.dag.Transformation in project flink by apache.

The class PythonOperatorChainingOptimizerTest, method testChainingTwoInputOperators.

@Test
public void testChainingTwoInputOperators() {
    PythonKeyedCoProcessOperator<?> keyedCoProcessOperator1 = createCoKeyedProcessOperator("f1", new RowTypeInfo(Types.INT(), Types.STRING()), new RowTypeInfo(Types.INT(), Types.INT()), Types.STRING());
    PythonProcessOperator<?, ?> processOperator1 = createProcessOperator("f2", new RowTypeInfo(Types.INT(), Types.INT()), Types.STRING());
    PythonProcessOperator<?, ?> processOperator2 = createProcessOperator("f3", new RowTypeInfo(Types.INT(), Types.INT()), Types.LONG());
    PythonKeyedProcessOperator<?> keyedProcessOperator2 = createKeyedProcessOperator("f4", new RowTypeInfo(Types.INT(), Types.INT()), Types.STRING());
    PythonProcessOperator<?, ?> processOperator3 = createProcessOperator("f5", new RowTypeInfo(Types.INT(), Types.INT()), Types.STRING());
    Transformation<?> sourceTransformation1 = mock(SourceTransformation.class);
    Transformation<?> sourceTransformation2 = mock(SourceTransformation.class);
    TwoInputTransformation<?, ?, ?> keyedCoProcessTransformation = new TwoInputTransformation(sourceTransformation1, sourceTransformation2, "keyedCoProcess", keyedCoProcessOperator1, keyedCoProcessOperator1.getProducedType(), 2);
    Transformation<?> processTransformation1 = new OneInputTransformation(keyedCoProcessTransformation, "process", processOperator1, processOperator1.getProducedType(), 2);
    Transformation<?> processTransformation2 = new OneInputTransformation(processTransformation1, "process", processOperator2, processOperator2.getProducedType(), 2);
    OneInputTransformation<?, ?> keyedProcessTransformation = new OneInputTransformation(processTransformation2, "keyedProcess", keyedProcessOperator2, keyedProcessOperator2.getProducedType(), 2);
    Transformation<?> processTransformation3 = new OneInputTransformation(keyedProcessTransformation, "process", processOperator3, processOperator3.getProducedType(), 2);
    List<Transformation<?>> transformations = new ArrayList<>();
    transformations.add(sourceTransformation1);
    transformations.add(sourceTransformation2);
    transformations.add(keyedCoProcessTransformation);
    transformations.add(processTransformation1);
    transformations.add(processTransformation2);
    transformations.add(keyedProcessTransformation);
    transformations.add(processTransformation3);
    List<Transformation<?>> optimized = PythonOperatorChainingOptimizer.optimize(transformations);
    assertEquals(4, optimized.size());
    TwoInputTransformation<?, ?, ?> chainedTransformation1 = (TwoInputTransformation<?, ?, ?>) optimized.get(2);
    assertEquals(sourceTransformation1.getOutputType(), chainedTransformation1.getInputType1());
    assertEquals(sourceTransformation2.getOutputType(), chainedTransformation1.getInputType2());
    assertEquals(processOperator2.getProducedType(), chainedTransformation1.getOutputType());
    OneInputTransformation<?, ?> chainedTransformation2 = (OneInputTransformation<?, ?>) optimized.get(3);
    assertEquals(processOperator2.getProducedType(), chainedTransformation2.getInputType());
    assertEquals(processOperator3.getProducedType(), chainedTransformation2.getOutputType());
    TwoInputStreamOperator<?, ?, ?> chainedOperator1 = chainedTransformation1.getOperator();
    assertTrue(chainedOperator1 instanceof PythonKeyedCoProcessOperator);
    validateChainedPythonFunctions(((PythonKeyedCoProcessOperator<?>) chainedOperator1).getPythonFunctionInfo(), "f3", "f2", "f1");
    OneInputStreamOperator<?, ?> chainedOperator2 = chainedTransformation2.getOperator();
    assertTrue(chainedOperator2 instanceof PythonKeyedProcessOperator);
    validateChainedPythonFunctions(((PythonKeyedProcessOperator<?>) chainedOperator2).getPythonFunctionInfo(), "f5", "f4");
}
Also used : SourceTransformation(org.apache.flink.streaming.api.transformations.SourceTransformation) TwoInputTransformation(org.apache.flink.streaming.api.transformations.TwoInputTransformation) OneInputTransformation(org.apache.flink.streaming.api.transformations.OneInputTransformation) Transformation(org.apache.flink.api.dag.Transformation) PythonKeyedProcessOperator(org.apache.flink.streaming.api.operators.python.PythonKeyedProcessOperator) ArrayList(java.util.ArrayList) RowTypeInfo(org.apache.flink.api.java.typeutils.RowTypeInfo) PythonKeyedCoProcessOperator(org.apache.flink.streaming.api.operators.python.PythonKeyedCoProcessOperator) Test(org.junit.Test)
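
The four transformations returned by optimize(...) reflect the chain boundaries: a keyed operator starts a new chain, so f1/f2/f3 fuse into a single two-input chain and f4/f5 into a one-input chain, while the two mocked sources pass through untouched. A sketch of the resulting topology, read off the assertions above:

source1 ─┐
         ├─ [f1 keyedCoProcess → f2 process → f3 process]   chainedTransformation1 (two-input)
source2 ─┘
                         │
            [f4 keyedProcess → f5 process]                   chainedTransformation2 (one-input)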

Aggregations

Transformation (org.apache.flink.api.dag.Transformation) 98
RowData (org.apache.flink.table.data.RowData) 69
ExecEdge (org.apache.flink.table.planner.plan.nodes.exec.ExecEdge) 53
RowType (org.apache.flink.table.types.logical.RowType) 50
OneInputTransformation (org.apache.flink.streaming.api.transformations.OneInputTransformation) 45
TableException (org.apache.flink.table.api.TableException) 28
RowDataKeySelector (org.apache.flink.table.runtime.keyselector.RowDataKeySelector) 28
ArrayList (java.util.ArrayList) 25
CodeGeneratorContext (org.apache.flink.table.planner.codegen.CodeGeneratorContext) 21
Configuration (org.apache.flink.configuration.Configuration) 19
TwoInputTransformation (org.apache.flink.streaming.api.transformations.TwoInputTransformation) 18
List (java.util.List) 17
PartitionTransformation (org.apache.flink.streaming.api.transformations.PartitionTransformation) 17
AggregateInfoList (org.apache.flink.table.planner.plan.utils.AggregateInfoList) 17
LogicalType (org.apache.flink.table.types.logical.LogicalType) 16
Test (org.junit.Test) 16
StreamExecutionEnvironment (org.apache.flink.streaming.api.environment.StreamExecutionEnvironment) 13
SourceTransformation (org.apache.flink.streaming.api.transformations.SourceTransformation) 13
Arrays (java.util.Arrays) 11
Collections (java.util.Collections) 10