Example 36 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

The class JoinQueryOperation, method calculateResultingSchema:

private ResolvedSchema calculateResultingSchema(QueryOperation left, QueryOperation right) {
    final ResolvedSchema leftSchema = left.getResolvedSchema();
    final ResolvedSchema rightSchema = right.getResolvedSchema();
    // The join schema is the concatenation of both inputs: all left columns, then all right columns.
    return ResolvedSchema.physical(
            Stream.concat(leftSchema.getColumnNames().stream(), rightSchema.getColumnNames().stream())
                    .collect(Collectors.toList()),
            Stream.concat(leftSchema.getColumnDataTypes().stream(), rightSchema.getColumnDataTypes().stream())
                    .collect(Collectors.toList()));
}
Also used : ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema)
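
For reference, ResolvedSchema.physical pairs column names with data types positionally. A minimal sketch of the concatenation result, using two hypothetical inputs (the class name and column names are made up for illustration):

import java.util.Arrays;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.catalog.ResolvedSchema;

public class JoinSchemaSketch {
    public static void main(String[] args) {
        // Joining hypothetical inputs (id BIGINT, name STRING) and (amount INT)
        // the way calculateResultingSchema does yields all left columns, then all right ones.
        ResolvedSchema joined = ResolvedSchema.physical(
                Arrays.asList("id", "name", "amount"),
                Arrays.asList(DataTypes.BIGINT(), DataTypes.STRING(), DataTypes.INT()));
        System.out.println(joined.getColumnNames()); // [id, name, amount]
    }
}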

Example 37 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

The class DataStreamJavaITCase, method testFromAndToDataStreamEventTime:

@Test
public void testFromAndToDataStreamEventTime() throws Exception {
    final StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
    final DataStream<Tuple3<Long, Integer, String>> dataStream = getWatermarkedDataStream();
    // Expose the stream's rowtime as a metadata column and forward the source watermark.
    final Table table = tableEnv.fromDataStream(dataStream, Schema.newBuilder()
            .columnByMetadata("rowtime", "TIMESTAMP_LTZ(3)")
            .watermark("rowtime", "SOURCE_WATERMARK()")
            .build());
    testSchema(table, new ResolvedSchema(
            Arrays.asList(
                    Column.physical("f0", BIGINT().notNull()),
                    Column.physical("f1", INT().notNull()),
                    Column.physical("f2", STRING()),
                    Column.metadata("rowtime", TIMESTAMP_LTZ(3), null, false)),
            Collections.singletonList(WatermarkSpec.of(
                    "rowtime", ResolvedExpressionMock.of(TIMESTAMP_LTZ(3), "`SOURCE_WATERMARK`()"))),
            null));
    tableEnv.createTemporaryView("t", table);
    final TableResult result = tableEnv.executeSql(
            "SELECT f2, SUM(f1) FROM t GROUP BY f2, TUMBLE(rowtime, INTERVAL '0.005' SECOND)");
    testResult(result, Row.of("a", 47), Row.of("c", 1000), Row.of("c", 1000));
    // The equivalent DataStream window aggregation must observe the same event-time semantics.
    testResult(tableEnv.toDataStream(table)
            .keyBy(k -> k.getField("f2"))
            .window(TumblingEventTimeWindows.of(Time.milliseconds(5)))
            .<Row>apply((key, window, input, out) -> {
                int sum = 0;
                for (Row row : input) {
                    sum += row.<Integer>getFieldAs("f1");
                }
                out.collect(Row.of(key, sum));
            })
            .returns(Types.ROW(Types.STRING, Types.INT)),
            Row.of("a", 47), Row.of("c", 1000), Row.of("c", 1000));
}
Also used : Table(org.apache.flink.table.api.Table) TableResult(org.apache.flink.table.api.TableResult) Tuple3(org.apache.flink.api.java.tuple.Tuple3) StreamTableEnvironment(org.apache.flink.table.api.bridge.java.StreamTableEnvironment) Row(org.apache.flink.types.Row) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) TypeHint(org.apache.flink.api.common.typeinfo.TypeHint) Test(org.junit.Test)
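
testSchema and testResult are private helpers of DataStreamJavaITCase. Since Table exposes its schema via getResolvedSchema() and ResolvedSchema implements equals(), the schema check presumably reduces to a plain assertion; a hedged sketch (the helper body here is an assumption, not the verified test source):

import static org.junit.Assert.assertEquals;

import org.apache.flink.table.api.Table;
import org.apache.flink.table.catalog.ResolvedSchema;

final class SchemaAssertSketch {
    // Assumed shape of the helper: compare the expected schema against the
    // schema the planner actually resolved for the table.
    static void testSchema(Table table, ResolvedSchema expected) {
        assertEquals(expected, table.getResolvedSchema());
    }
}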

Example 38 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

The class DataStreamJavaITCase, method testToDataStreamCustomEventTime:

@Test
public void testToDataStreamCustomEventTime() throws Exception {
    final StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
    final TableConfig tableConfig = tableEnv.getConfig();
    // The session time zone should not have an impact on the conversion.
    final ZoneId originalZone = tableConfig.getLocalTimeZone();
    tableConfig.setLocalTimeZone(ZoneId.of("Europe/Berlin"));
    final LocalDateTime localDateTime1 = LocalDateTime.parse("1970-01-01T00:00:00.000");
    final LocalDateTime localDateTime2 = LocalDateTime.parse("1970-01-01T01:00:00.000");
    final DataStream<Tuple2<LocalDateTime, String>> dataStream =
            env.fromElements(new Tuple2<>(localDateTime1, "alice"), new Tuple2<>(localDateTime2, "bob"));
    // Declare f0 as the event-time column and forward the source watermark.
    final Table table = tableEnv.fromDataStream(dataStream, Schema.newBuilder()
            .column("f0", "TIMESTAMP(3)")
            .column("f1", "STRING")
            .watermark("f0", "SOURCE_WATERMARK()")
            .build());
    testSchema(table, new ResolvedSchema(
            Arrays.asList(Column.physical("f0", TIMESTAMP(3)), Column.physical("f1", STRING())),
            Collections.singletonList(WatermarkSpec.of(
                    "f0", ResolvedExpressionMock.of(TIMESTAMP(3), "`SOURCE_WATERMARK`()"))),
            null));
    // Extract the StreamRecord timestamps assigned during the conversion.
    final DataStream<Long> rowtimeStream =
            tableEnv.toDataStream(table).process(new ProcessFunction<Row, Long>() {
                @Override
                public void processElement(Row value, Context ctx, Collector<Long> out) {
                    out.collect(ctx.timestamp());
                }
            });
    testResult(
            rowtimeStream,
            localDateTime1.atOffset(ZoneOffset.UTC).toInstant().toEpochMilli(),
            localDateTime2.atOffset(ZoneOffset.UTC).toInstant().toEpochMilli());
    tableConfig.setLocalTimeZone(originalZone);
}
Also used : LocalDateTime(java.time.LocalDateTime) Table(org.apache.flink.table.api.Table) ZoneId(java.time.ZoneId) Tuple2(org.apache.flink.api.java.tuple.Tuple2) TableConfig(org.apache.flink.table.api.TableConfig) StreamTableEnvironment(org.apache.flink.table.api.bridge.java.StreamTableEnvironment) Row(org.apache.flink.types.Row) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) Test(org.junit.Test)
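
The expected rowtime values are derived purely in UTC, which is exactly why setting the session zone to Europe/Berlin must not change the result. A small self-contained sketch of that arithmetic:

import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class RowtimeMillisSketch {
    public static void main(String[] args) {
        // 1970-01-01T01:00 pinned to UTC is one hour past the epoch: 3,600,000 ms.
        // The session time zone never enters this computation.
        LocalDateTime ldt = LocalDateTime.parse("1970-01-01T01:00:00.000");
        System.out.println(ldt.atOffset(ZoneOffset.UTC).toInstant().toEpochMilli()); // 3600000
    }
}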

Example 39 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project flink by apache.

The class ContextResolvedTableJsonDeserializer, method deserialize:

@Override
public ContextResolvedTable deserialize(JsonParser jsonParser, DeserializationContext ctx) throws IOException {
    final CatalogPlanRestore planRestoreOption =
            SerdeContext.get(ctx).getConfiguration().get(PLAN_RESTORE_CATALOG_OBJECTS);
    final CatalogManager catalogManager =
            SerdeContext.get(ctx).getFlinkContext().getCatalogManager();
    final ObjectNode objectNode = jsonParser.readValueAsTree();
    // Deserialize the two fields, if available.
    final ObjectIdentifier identifier =
            JsonSerdeUtil.deserializeOptionalField(
                            objectNode, FIELD_NAME_IDENTIFIER, ObjectIdentifier.class, jsonParser.getCodec(), ctx)
                    .orElse(null);
    final ResolvedCatalogTable resolvedCatalogTable =
            JsonSerdeUtil.deserializeOptionalField(
                            objectNode, FIELD_NAME_CATALOG_TABLE, ResolvedCatalogTable.class, jsonParser.getCodec(), ctx)
                    .orElse(null);
    if (identifier == null && resolvedCatalogTable == null) {
        throw new ValidationException(
                String.format(
                        "The input JSON is invalid because it doesn't contain '%s', nor the '%s'.",
                        FIELD_NAME_IDENTIFIER, FIELD_NAME_CATALOG_TABLE));
    }
    if (identifier == null) {
        if (isLookupForced(planRestoreOption)) {
            throw missingIdentifier();
        }
        return ContextResolvedTable.anonymous(resolvedCatalogTable);
    }
    final Optional<ContextResolvedTable> contextResolvedTableFromCatalog =
            isLookupEnabled(planRestoreOption) ? catalogManager.getTable(identifier) : Optional.empty();
    // If we have a schema from both the plan and the catalog, we need to check that they match.
    if (contextResolvedTableFromCatalog.isPresent() && resolvedCatalogTable != null) {
        final ResolvedSchema schemaFromPlan = resolvedCatalogTable.getResolvedSchema();
        final ResolvedSchema schemaFromCatalog =
                contextResolvedTableFromCatalog.get().getResolvedSchema();
        if (!areResolvedSchemasEqual(schemaFromPlan, schemaFromCatalog)) {
            throw schemaNotMatching(identifier, schemaFromPlan, schemaFromCatalog);
        }
    }
    if (resolvedCatalogTable == null || isLookupForced(planRestoreOption)) {
        if (!isLookupEnabled(planRestoreOption)) {
            throw lookupDisabled(identifier);
        }
        // We use what is stored inside the catalog.
        return contextResolvedTableFromCatalog.orElseThrow(
                () -> missingTableFromCatalog(identifier, isLookupForced(planRestoreOption)));
    }
    if (contextResolvedTableFromCatalog.isPresent()) {
        // If no options are present, the table was serialized with only its SCHEMA,
        // so we just need to return the catalog query result.
        if (objectNode.at("/" + FIELD_NAME_CATALOG_TABLE + "/" + OPTIONS).isMissingNode()) {
            return contextResolvedTableFromCatalog.get();
        }
        return contextResolvedTableFromCatalog
                .flatMap(ContextResolvedTable::getCatalog)
                .map(c -> ContextResolvedTable.permanent(identifier, c, resolvedCatalogTable))
                .orElseGet(() -> ContextResolvedTable.temporary(identifier, resolvedCatalogTable));
    }
    return ContextResolvedTable.temporary(identifier, resolvedCatalogTable);
}
Also used : CatalogManager(org.apache.flink.table.catalog.CatalogManager) ObjectIdentifier(org.apache.flink.table.catalog.ObjectIdentifier) FIELD_NAME_IDENTIFIER(org.apache.flink.table.planner.plan.nodes.exec.serde.ContextResolvedTableJsonSerializer.FIELD_NAME_IDENTIFIER) Column(org.apache.flink.table.catalog.Column) ObjectNode(org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.node.ObjectNode) IDENTIFIER(org.apache.flink.table.api.config.TableConfigOptions.CatalogPlanRestore.IDENTIFIER) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema) JsonParser(org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonParser) IOException(java.io.IOException) PLAN_COMPILE_CATALOG_OBJECTS(org.apache.flink.table.api.config.TableConfigOptions.PLAN_COMPILE_CATALOG_OBJECTS) CatalogPlanRestore(org.apache.flink.table.api.config.TableConfigOptions.CatalogPlanRestore) CatalogPlanCompilation(org.apache.flink.table.api.config.TableConfigOptions.CatalogPlanCompilation) Objects(java.util.Objects) OPTIONS(org.apache.flink.table.planner.plan.nodes.exec.serde.ResolvedCatalogTableJsonSerializer.OPTIONS) DeserializationContext(org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.DeserializationContext) List(java.util.List) ValidationException(org.apache.flink.table.api.ValidationException) Optional(java.util.Optional) Internal(org.apache.flink.annotation.Internal) StdDeserializer(org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.deser.std.StdDeserializer) PLAN_RESTORE_CATALOG_OBJECTS(org.apache.flink.table.api.config.TableConfigOptions.PLAN_RESTORE_CATALOG_OBJECTS) ResolvedCatalogTable(org.apache.flink.table.catalog.ResolvedCatalogTable) FIELD_NAME_CATALOG_TABLE(org.apache.flink.table.planner.plan.nodes.exec.serde.ContextResolvedTableJsonSerializer.FIELD_NAME_CATALOG_TABLE) ContextResolvedTable(org.apache.flink.table.catalog.ContextResolvedTable)
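
areResolvedSchemasEqual is a private helper of this deserializer. Given that the imports above include java.util.Objects and Column, one plausible implementation compares just the resolved columns; a hedged sketch (this body is an assumption, not the verified Flink source):

import java.util.Objects;

import org.apache.flink.table.catalog.ResolvedSchema;

final class SchemaEqualitySketch {
    // Assumed check: two schemas match when their resolved columns are identical;
    // constraints and watermark specs may be validated elsewhere.
    static boolean areResolvedSchemasEqual(ResolvedSchema fromPlan, ResolvedSchema fromCatalog) {
        return Objects.equals(fromPlan.getColumns(), fromCatalog.getColumns());
    }
}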

Example 40 with ResolvedSchema

Use of org.apache.flink.table.catalog.ResolvedSchema in project zeppelin by apache.

The class Flink114Shims, method rowToString:

@Override
public String[] rowToString(Object row, Object table, Object tableConfig) {
    // Resolve the session time zone; fall back to the system default when the
    // option still holds its default value.
    final String zone =
            ((TableConfig) tableConfig).getConfiguration().get(TableConfigOptions.LOCAL_TIME_ZONE);
    final ZoneId zoneId =
            TableConfigOptions.LOCAL_TIME_ZONE.defaultValue().equals(zone)
                    ? ZoneId.systemDefault()
                    : ZoneId.of(zone);
    final ResolvedSchema resolvedSchema = ((Table) table).getResolvedSchema();
    return PrintUtils.rowToString((Row) row, resolvedSchema, zoneId);
}
Also used : Table(org.apache.flink.table.api.Table) ZoneId(java.time.ZoneId) AttributedString(org.jline.utils.AttributedString) ResolvedSchema(org.apache.flink.table.catalog.ResolvedSchema)
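
The fallback from the configured value to the system zone can be exercised in isolation with plain java.time; in this sketch the literal "default" stands in for TableConfigOptions.LOCAL_TIME_ZONE.defaultValue() (an assumption made for self-containment):

import java.time.ZoneId;

public class ZoneFallbackSketch {
    public static void main(String[] args) {
        String zone = "default"; // what the config returns when the user set nothing
        // Same branch as rowToString: keep the JVM zone unless a real zone id was configured.
        ZoneId zoneId = "default".equals(zone) ? ZoneId.systemDefault() : ZoneId.of(zone);
        System.out.println(zoneId);
    }
}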

Aggregations

ResolvedSchema (org.apache.flink.table.catalog.ResolvedSchema)84 Test (org.junit.Test)50 DynamicTableSink (org.apache.flink.table.connector.sink.DynamicTableSink)20 DataType (org.apache.flink.table.types.DataType)20 RowData (org.apache.flink.table.data.RowData)17 ValidationException (org.apache.flink.table.api.ValidationException)14 ResolvedCatalogTable (org.apache.flink.table.catalog.ResolvedCatalogTable)14 List (java.util.List)11 SinkRuntimeProviderContext (org.apache.flink.table.runtime.connector.sink.SinkRuntimeProviderContext)11 DynamicTableSource (org.apache.flink.table.connector.source.DynamicTableSource)10 Column (org.apache.flink.table.catalog.Column)9 LogicalType (org.apache.flink.table.types.logical.LogicalType)9 RowType (org.apache.flink.table.types.logical.RowType)9 HashMap (java.util.HashMap)8 Collectors (java.util.stream.Collectors)8 RelDataType (org.apache.calcite.rel.type.RelDataType)8 Internal (org.apache.flink.annotation.Internal)8 HBaseWriteOptions (org.apache.flink.connector.hbase.options.HBaseWriteOptions)6 FlinkTypeFactory (org.apache.flink.table.planner.calcite.FlinkTypeFactory)6 Row (org.apache.flink.types.Row)6