Search in sources :

Example 1 with DataStructureConverter

use of org.apache.flink.table.data.conversion.DataStructureConverter in project flink by apache.

the class BaseMaterializedResultTest method createInternalBinaryRowDataConverter.

static Function<Row, BinaryRowData> createInternalBinaryRowDataConverter(DataType dataType) {
    DataStructureConverter<Object, Object> converter = DataStructureConverters.getConverter(dataType);
    RowDataSerializer serializer = new RowDataSerializer((RowType) dataType.getLogicalType());
    return row -> serializer.toBinaryRow((RowData) converter.toInternalOrNull(row)).copy();
}
Also used : DataType(org.apache.flink.table.types.DataType) List(java.util.List) RowData(org.apache.flink.table.data.RowData) DataStructureConverter(org.apache.flink.table.data.conversion.DataStructureConverter) DataStructureConverters(org.apache.flink.table.data.conversion.DataStructureConverters) RowDataSerializer(org.apache.flink.table.runtime.typeutils.RowDataSerializer) Row(org.apache.flink.types.Row) Assertions.assertEquals(org.junit.jupiter.api.Assertions.assertEquals) BinaryRowData(org.apache.flink.table.data.binary.BinaryRowData) RowType(org.apache.flink.table.types.logical.RowType) Function(java.util.function.Function) Collectors(java.util.stream.Collectors)
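
For orientation, here is a minimal, self-contained sketch of the same round trip for a two-field row type; the wrapping class and main method are illustrative additions, not part of the Flink test.

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.binary.BinaryRowData;
import org.apache.flink.table.data.conversion.DataStructureConverter;
import org.apache.flink.table.data.conversion.DataStructureConverters;
import org.apache.flink.table.runtime.typeutils.RowDataSerializer;
import org.apache.flink.table.types.DataType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.types.Row;

public class ConverterRoundTripSketch {
    public static void main(String[] args) {
        DataType dataType = DataTypes.ROW(
                DataTypes.FIELD("f0", DataTypes.STRING()),
                DataTypes.FIELD("f1", DataTypes.INT()));

        // obtain and open the converter (open is part of the DataStructureConverter contract)
        DataStructureConverter<Object, Object> converter = DataStructureConverters.getConverter(dataType);
        converter.open(ConverterRoundTripSketch.class.getClassLoader());

        // external Row -> internal RowData -> compact BinaryRowData, as in the helper above
        RowData internal = (RowData) converter.toInternalOrNull(Row.of("A", 1));
        RowDataSerializer serializer = new RowDataSerializer((RowType) dataType.getLogicalType());
        BinaryRowData binary = serializer.toBinaryRow(internal).copy();

        // and back to the external representation
        Row external = (Row) converter.toExternalOrNull(binary);
        System.out.println(external); // prints +I[A, 1]
    }
}

The copy() call mirrors the helper above: toBinaryRow writes into a buffer that the serializer reuses, so a caller that keeps the result must detach it.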

Example 2 with DataStructureConverter

use of org.apache.flink.table.data.conversion.DataStructureConverter in project flink by apache.

the class CommonExecLookupJoin method createAsyncLookupJoin.

@SuppressWarnings("unchecked")
private StreamOperatorFactory<RowData> createAsyncLookupJoin(RelOptTable temporalTable, ExecNodeConfig config, Map<Integer, LookupJoinUtil.LookupKey> allLookupKeys, AsyncTableFunction<Object> asyncLookupFunction, RelBuilder relBuilder, RowType inputRowType, RowType tableSourceRowType, RowType resultRowType, boolean isLeftOuterJoin) {
    int asyncBufferCapacity = config.get(ExecutionConfigOptions.TABLE_EXEC_ASYNC_LOOKUP_BUFFER_CAPACITY);
    long asyncTimeout = config.get(ExecutionConfigOptions.TABLE_EXEC_ASYNC_LOOKUP_TIMEOUT).toMillis();
    DataTypeFactory dataTypeFactory = ShortcutUtils.unwrapContext(relBuilder).getCatalogManager().getDataTypeFactory();
    LookupJoinCodeGenerator.GeneratedTableFunctionWithDataType<AsyncFunction<RowData, Object>> generatedFuncWithType = LookupJoinCodeGenerator.generateAsyncLookupFunction(config.getTableConfig(), dataTypeFactory, inputRowType, tableSourceRowType, resultRowType, allLookupKeys, LookupJoinUtil.getOrderedLookupKeys(allLookupKeys.keySet()), asyncLookupFunction, StringUtils.join(temporalTable.getQualifiedName(), "."));
    RowType rightRowType = Optional.ofNullable(temporalTableOutputType).map(FlinkTypeFactory::toLogicalRowType).orElse(tableSourceRowType);
    GeneratedResultFuture<TableFunctionResultFuture<RowData>> generatedResultFuture = LookupJoinCodeGenerator.generateTableAsyncCollector(config.getTableConfig(), "TableFunctionResultFuture", inputRowType, rightRowType, JavaScalaConversionUtil.toScala(Optional.ofNullable(joinCondition)));
    DataStructureConverter<?, ?> fetcherConverter = DataStructureConverters.getConverter(generatedFuncWithType.dataType());
    AsyncFunction<RowData, RowData> asyncFunc;
    if (existCalcOnTemporalTable) {
        // a projection or filter after table source scan
        GeneratedFunction<FlatMapFunction<RowData, RowData>> generatedCalc = LookupJoinCodeGenerator.generateCalcMapFunction(config.getTableConfig(), JavaScalaConversionUtil.toScala(projectionOnTemporalTable), filterOnTemporalTable, temporalTableOutputType, tableSourceRowType);
        asyncFunc = new AsyncLookupJoinWithCalcRunner(generatedFuncWithType.tableFunc(), (DataStructureConverter<RowData, Object>) fetcherConverter, generatedCalc, generatedResultFuture, InternalSerializers.create(rightRowType), isLeftOuterJoin, asyncBufferCapacity);
    } else {
        // right type is the same as table source row type, because no calc after temporal table
        asyncFunc = new AsyncLookupJoinRunner(generatedFuncWithType.tableFunc(), (DataStructureConverter<RowData, Object>) fetcherConverter, generatedResultFuture, InternalSerializers.create(rightRowType), isLeftOuterJoin, asyncBufferCapacity);
    }
    // force ORDERED output mode for now; this could become UNORDERED when the downstream does not need the input order preserved
    return new AsyncWaitOperatorFactory<>(asyncFunc, asyncTimeout, asyncBufferCapacity, AsyncDataStream.OutputMode.ORDERED);
}
Also used : AsyncLookupJoinRunner(org.apache.flink.table.runtime.operators.join.lookup.AsyncLookupJoinRunner) AsyncWaitOperatorFactory(org.apache.flink.streaming.api.operators.async.AsyncWaitOperatorFactory) DataStructureConverter(org.apache.flink.table.data.conversion.DataStructureConverter) RowType(org.apache.flink.table.types.logical.RowType) DataTypeFactory(org.apache.flink.table.catalog.DataTypeFactory) AsyncLookupJoinWithCalcRunner(org.apache.flink.table.runtime.operators.join.lookup.AsyncLookupJoinWithCalcRunner) AsyncFunction(org.apache.flink.streaming.api.functions.async.AsyncFunction) LookupJoinCodeGenerator(org.apache.flink.table.planner.codegen.LookupJoinCodeGenerator) RowData(org.apache.flink.table.data.RowData) TableFunctionResultFuture(org.apache.flink.table.runtime.collector.TableFunctionResultFuture) FlatMapFunction(org.apache.flink.api.common.functions.FlatMapFunction)
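
The first two statements of the method read planner configuration. As a small aside, the same options can be set and read on a plain Configuration object; a sketch with arbitrary example values (not the defaults):

import java.time.Duration;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.table.api.config.ExecutionConfigOptions;

public class AsyncLookupOptionsSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // maximum number of in-flight async lookup requests per operator
        conf.set(ExecutionConfigOptions.TABLE_EXEC_ASYNC_LOOKUP_BUFFER_CAPACITY, 256);
        // how long a single async lookup may run before it times out
        conf.set(ExecutionConfigOptions.TABLE_EXEC_ASYNC_LOOKUP_TIMEOUT, Duration.ofSeconds(30));

        int asyncBufferCapacity = conf.get(ExecutionConfigOptions.TABLE_EXEC_ASYNC_LOOKUP_BUFFER_CAPACITY);
        long asyncTimeout = conf.get(ExecutionConfigOptions.TABLE_EXEC_ASYNC_LOOKUP_TIMEOUT).toMillis();
        System.out.println(asyncBufferCapacity + " in-flight requests, " + asyncTimeout + " ms timeout");
    }
}

Note that the method always passes AsyncDataStream.OutputMode.ORDERED to AsyncWaitOperatorFactory, which preserves the input order of the lookups at the cost of extra buffering; switching to UNORDERED is only safe when downstream operators do not rely on that order.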

Example 3 with DataStructureConverter

use of org.apache.flink.table.data.conversion.DataStructureConverter in project flink by apache.

the class MaterializedCollectBatchResultTest method testSnapshot.

@Test
public void testSnapshot() throws Exception {
    final ResolvedSchema schema = ResolvedSchema.physical(new String[] { "f0", "f1" }, new DataType[] { DataTypes.STRING(), DataTypes.INT() });
    @SuppressWarnings({ "unchecked", "rawtypes" }) final DataStructureConverter<RowData, Row> rowConverter = (DataStructureConverter) DataStructureConverters.getConverter(schema.toPhysicalRowDataType());
    try (TestMaterializedCollectBatchResult result = new TestMaterializedCollectBatchResult(new TestTableResult(ResultKind.SUCCESS_WITH_CONTENT, schema), Integer.MAX_VALUE, createInternalBinaryRowDataConverter(schema.toPhysicalRowDataType()))) {
        result.isRetrieving = true;
        result.processRecord(Row.of("A", 1));
        result.processRecord(Row.of("B", 1));
        result.processRecord(Row.of("A", 1));
        result.processRecord(Row.of("C", 2));
        assertEquals(TypedResult.payload(4), result.snapshot(1));
        assertRowEquals(Collections.singletonList(Row.of("A", 1)), result.retrievePage(1), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("B", 1)), result.retrievePage(2), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("A", 1)), result.retrievePage(3), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("C", 2)), result.retrievePage(4), rowConverter);
        result.processRecord(Row.of("A", 1));
        assertEquals(TypedResult.payload(5), result.snapshot(1));
        assertRowEquals(Collections.singletonList(Row.of("A", 1)), result.retrievePage(1), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("B", 1)), result.retrievePage(2), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("A", 1)), result.retrievePage(3), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("C", 2)), result.retrievePage(4), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("A", 1)), result.retrievePage(5), rowConverter);
    }
}
Also used : RowData(org.apache.flink.table.data.RowData) BinaryRowData(org.apache.flink.table.data.binary.BinaryRowData) DataStructureConverter(org.apache.flink.table.data.conversion.DataStructureConverter) Row(org.apache.flink.types.Row) TestTableResult(org.apache.flink.table.client.cli.utils.TestTableResult) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) Test(org.junit.Test)
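
assertRowEquals compares the retrieved internal rows with expected Row values, which relies on rowConverter working in the external direction as well. A minimal sketch of that direction (the class name is illustrative):

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.catalog.ResolvedSchema;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.data.conversion.DataStructureConverter;
import org.apache.flink.table.data.conversion.DataStructureConverters;
import org.apache.flink.table.types.DataType;
import org.apache.flink.types.Row;

public class ExternalConversionSketch {
    public static void main(String[] args) {
        ResolvedSchema schema = ResolvedSchema.physical(
                new String[] { "f0", "f1" },
                new DataType[] { DataTypes.STRING(), DataTypes.INT() });

        @SuppressWarnings({ "unchecked", "rawtypes" })
        DataStructureConverter<RowData, Row> rowConverter =
                (DataStructureConverter) DataStructureConverters.getConverter(schema.toPhysicalRowDataType());
        rowConverter.open(ExternalConversionSketch.class.getClassLoader());

        // internal rows carry StringData instead of String
        RowData internal = GenericRowData.of(StringData.fromString("A"), 1);
        Row external = rowConverter.toExternalOrNull(internal);
        System.out.println(external); // prints +I[A, 1]
    }
}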

Example 4 with DataStructureConverter

use of org.apache.flink.table.data.conversion.DataStructureConverter in project flink by apache.

the class MaterializedCollectBatchResultTest method testLimitedSnapshot.

@Test
public void testLimitedSnapshot() throws Exception {
    final ResolvedSchema schema = ResolvedSchema.physical(new String[] { "f0", "f1" }, new DataType[] { DataTypes.STRING(), DataTypes.INT() });
    @SuppressWarnings({ "unchecked", "rawtypes" }) final DataStructureConverter<RowData, Row> rowConverter = (DataStructureConverter) DataStructureConverters.getConverter(schema.toPhysicalRowDataType());
    try (TestMaterializedCollectBatchResult result =
            new TestMaterializedCollectBatchResult(
                    new TestTableResult(ResultKind.SUCCESS_WITH_CONTENT, schema),
                    // limit the materialized table to 2 rows with 3 rows overcommitment
                    2,
                    3,
                    createInternalBinaryRowDataConverter(schema.toPhysicalRowDataType()))) {
        result.isRetrieving = true;
        result.processRecord(Row.of("D", 1));
        result.processRecord(Row.of("A", 1));
        result.processRecord(Row.of("B", 1));
        result.processRecord(Row.of("A", 1));
        assertRowEquals(
                Arrays.asList(null, null, Row.of("B", 1), Row.of("A", 1)), // two over-committed rows
                result.getMaterializedTable(),
                rowConverter);
        assertEquals(TypedResult.payload(2), result.snapshot(1));
        assertRowEquals(Collections.singletonList(Row.of("B", 1)), result.retrievePage(1), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("A", 1)), result.retrievePage(2), rowConverter);
        result.processRecord(Row.of("C", 1));
        // limit clean up has taken place
        assertRowEquals(
                Arrays.asList(Row.of("A", 1), Row.of("C", 1)),
                result.getMaterializedTable(),
                rowConverter);
        result.processRecord(Row.of("A", 1));
        assertRowEquals(Arrays.asList(null, Row.of("C", 1), Row.of("A", 1)), result.getMaterializedTable(), rowConverter);
    }
}
Also used : RowData(org.apache.flink.table.data.RowData) BinaryRowData(org.apache.flink.table.data.binary.BinaryRowData) DataStructureConverter(org.apache.flink.table.data.conversion.DataStructureConverter) Row(org.apache.flink.types.Row) TestTableResult(org.apache.flink.table.client.cli.utils.TestTableResult) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) Test(org.junit.Test)
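
The two integer constructor arguments (2 and 3 above) bound the materialized table: a row falling out of the limit is first only invalidated, and an actual cleanup runs once enough over-committed entries have accumulated. A hypothetical, stripped-down sketch of that idea, with invented names and not taken from Flink's MaterializedCollectBatchResult:

import java.util.ArrayList;
import java.util.List;
import org.apache.flink.types.Row;

public class LimitedMaterializationSketch {
    private final int limit;
    private final int overcommit;
    private List<Row> materialized = new ArrayList<>();
    private int validRowCount = 0;

    public LimitedMaterializationSketch(int limit, int overcommit) {
        this.limit = limit;
        this.overcommit = overcommit;
    }

    public void processRecord(Row row) {
        materialized.add(row);
        validRowCount++;
        if (validRowCount > limit) {
            // cheap eviction: null out the oldest row that still counted as valid,
            // so existing page offsets stay stable
            materialized.set(materialized.size() - validRowCount, null);
            validRowCount = limit;
        }
        if (materialized.size() - validRowCount >= overcommit) {
            // expensive cleanup: drop the leading nulls and keep only the valid tail
            materialized = new ArrayList<>(
                    materialized.subList(materialized.size() - validRowCount, materialized.size()));
        }
    }

    public List<Row> getMaterializedTable() {
        return materialized;
    }
}

With limit 2 and overcommitment 3 this reproduces the intermediate tables asserted above: [null, null, B, A] before cleanup, [A, C] right after it, and [null, C, A] after one more record.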

Example 5 with DataStructureConverter

use of org.apache.flink.table.data.conversion.DataStructureConverter in project flink by apache.

the class MaterializedCollectStreamResultTest method testSnapshot.

@Test
public void testSnapshot() throws Exception {
    final ResolvedSchema schema = ResolvedSchema.physical(new String[] { "f0", "f1" }, new DataType[] { DataTypes.STRING(), DataTypes.INT() });
    @SuppressWarnings({ "unchecked", "rawtypes" }) final DataStructureConverter<RowData, Row> rowConverter = (DataStructureConverter) DataStructureConverters.getConverter(schema.toPhysicalRowDataType());
    try (TestMaterializedCollectStreamResult result = new TestMaterializedCollectStreamResult(new TestTableResult(ResultKind.SUCCESS_WITH_CONTENT, schema), Integer.MAX_VALUE, createInternalBinaryRowDataConverter(schema.toPhysicalRowDataType()))) {
        result.isRetrieving = true;
        result.processRecord(Row.ofKind(RowKind.INSERT, "A", 1));
        result.processRecord(Row.ofKind(RowKind.INSERT, "B", 1));
        result.processRecord(Row.ofKind(RowKind.INSERT, "A", 1));
        result.processRecord(Row.ofKind(RowKind.INSERT, "C", 2));
        assertEquals(TypedResult.payload(4), result.snapshot(1));
        assertRowEquals(Collections.singletonList(Row.of("A", 1)), result.retrievePage(1), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("B", 1)), result.retrievePage(2), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("A", 1)), result.retrievePage(3), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("C", 2)), result.retrievePage(4), rowConverter);
        result.processRecord(Row.ofKind(RowKind.UPDATE_BEFORE, "A", 1));
        assertEquals(TypedResult.payload(3), result.snapshot(1));
        assertRowEquals(Collections.singletonList(Row.of("A", 1)), result.retrievePage(1), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("B", 1)), result.retrievePage(2), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("C", 2)), result.retrievePage(3), rowConverter);
        result.processRecord(Row.ofKind(RowKind.UPDATE_BEFORE, "C", 2));
        result.processRecord(Row.ofKind(RowKind.UPDATE_BEFORE, "A", 1));
        result.processRecord(Row.ofKind(RowKind.UPDATE_AFTER, "D", 1));
        assertEquals(TypedResult.payload(2), result.snapshot(1));
        assertRowEquals(Collections.singletonList(Row.of("B", 1)), result.retrievePage(1), rowConverter);
        assertRowEquals(Collections.singletonList(Row.of("D", 1)), result.retrievePage(2), rowConverter);
    }
}
Also used : RowData(org.apache.flink.table.data.RowData) BinaryRowData(org.apache.flink.table.data.binary.BinaryRowData) DataStructureConverter(org.apache.flink.table.data.conversion.DataStructureConverter) Row(org.apache.flink.types.Row) TestTableResult(org.apache.flink.table.client.cli.utils.TestTableResult) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) Test(org.junit.Test)
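
Unlike the batch test, the stream test feeds a changelog, so UPDATE_BEFORE records retract previously materialized rows; that is why the snapshot size drops from 4 to 3 to 2. A hypothetical sketch of that folding step, not the actual MaterializedCollectStreamResult logic, and ignoring limits and paging:

import java.util.ArrayList;
import java.util.List;
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;

public class ChangelogMaterializerSketch {
    private final List<Row> table = new ArrayList<>();

    public void processRecord(Row row) {
        // normalize to an insert-kind row so equality depends only on the field values
        Row insertRow = Row.copy(row);
        insertRow.setKind(RowKind.INSERT);
        if (row.getKind() == RowKind.INSERT || row.getKind() == RowKind.UPDATE_AFTER) {
            table.add(insertRow);
        } else {
            // UPDATE_BEFORE / DELETE: retract one previously added row with the same fields
            table.remove(insertRow);
        }
    }

    public List<Row> snapshot() {
        return new ArrayList<>(table);
    }
}

Fed with the records above, the table holds A, B, A, C after the four inserts, drops one A after the first UPDATE_BEFORE, and ends up holding B and D.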

Aggregations

RowData (org.apache.flink.table.data.RowData) 6
DataStructureConverter (org.apache.flink.table.data.conversion.DataStructureConverter) 6
BinaryRowData (org.apache.flink.table.data.binary.BinaryRowData) 5
Row (org.apache.flink.types.Row) 5
ResolvedSchema (org.apache.flink.table.catalog.ResolvedSchema) 4
TestTableResult (org.apache.flink.table.client.cli.utils.TestTableResult) 4
Test (org.junit.Test) 4
RowType (org.apache.flink.table.types.logical.RowType) 2
List (java.util.List) 1
Function (java.util.function.Function) 1
Collectors (java.util.stream.Collectors) 1
FlatMapFunction (org.apache.flink.api.common.functions.FlatMapFunction) 1
AsyncFunction (org.apache.flink.streaming.api.functions.async.AsyncFunction) 1
AsyncWaitOperatorFactory (org.apache.flink.streaming.api.operators.async.AsyncWaitOperatorFactory) 1
DataTypeFactory (org.apache.flink.table.catalog.DataTypeFactory) 1
DataStructureConverters (org.apache.flink.table.data.conversion.DataStructureConverters) 1
LookupJoinCodeGenerator (org.apache.flink.table.planner.codegen.LookupJoinCodeGenerator) 1
TableFunctionResultFuture (org.apache.flink.table.runtime.collector.TableFunctionResultFuture) 1
AsyncLookupJoinRunner (org.apache.flink.table.runtime.operators.join.lookup.AsyncLookupJoinRunner) 1
AsyncLookupJoinWithCalcRunner (org.apache.flink.table.runtime.operators.join.lookup.AsyncLookupJoinWithCalcRunner) 1