
Example 11 with GenericTypeInfo

Use of org.apache.flink.api.java.typeutils.GenericTypeInfo in project flink by apache.

From class NestedRowDataTest, method getBinaryRowData.

private BinaryRowData getBinaryRowData() {
    BinaryRowData row = new BinaryRowData(1);
    BinaryRowWriter writer = new BinaryRowWriter(row);
    GenericTypeInfo<MyObj> info = new GenericTypeInfo<>(MyObj.class);
    TypeSerializer<MyObj> genericSerializer = info.createSerializer(new ExecutionConfig());
    GenericRowData gRow = new GenericRowData(5);
    gRow.setField(0, 1);
    gRow.setField(1, 5L);
    gRow.setField(2, StringData.fromString("12345678"));
    gRow.setField(3, null);
    gRow.setField(4, RawValueData.fromObject(new MyObj(15, 5)));
    RowDataSerializer serializer = new RowDataSerializer(
            new LogicalType[] {
                DataTypes.INT().getLogicalType(),
                DataTypes.BIGINT().getLogicalType(),
                DataTypes.STRING().getLogicalType(),
                DataTypes.STRING().getLogicalType(),
                DataTypes.RAW(info.getTypeClass(), genericSerializer).getLogicalType()
            },
            new TypeSerializer[] {
                IntSerializer.INSTANCE,
                LongSerializer.INSTANCE,
                StringDataSerializer.INSTANCE,
                StringDataSerializer.INSTANCE,
                new RawValueDataSerializer<>(genericSerializer)
            });
    writer.writeRow(0, gRow, serializer);
    writer.complete();
    return row;
}
Also used: BinaryRowData (org.apache.flink.table.data.binary.BinaryRowData), BinaryRowWriter (org.apache.flink.table.data.writer.BinaryRowWriter), ExecutionConfig (org.apache.flink.api.common.ExecutionConfig), MyObj (org.apache.flink.table.data.util.DataFormatTestUtil.MyObj), GenericTypeInfo (org.apache.flink.api.java.typeutils.GenericTypeInfo), RowDataSerializer (org.apache.flink.table.runtime.typeutils.RowDataSerializer)
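
GenericTypeInfo is Flink's fallback TypeInformation for classes that the type analyzer cannot treat as a tuple or POJO, and its createSerializer delegates to Kryo. A minimal standalone sketch illustrating this (StringBuilder is just an arbitrary non-POJO class chosen for the example, not taken from the test above):

import org.apache.flink.api.common.ExecutionConfig;
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.api.java.typeutils.GenericTypeInfo;
import org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer;

public class GenericTypeInfoSketch {
    public static void main(String[] args) {
        // Wrap an arbitrary class that Flink cannot analyze as a tuple or POJO.
        GenericTypeInfo<StringBuilder> info = new GenericTypeInfo<>(StringBuilder.class);
        TypeSerializer<StringBuilder> serializer = info.createSerializer(new ExecutionConfig());
        // The returned serializer is Kryo-based, which is why generic types
        // serialize more slowly than Flink's native tuple and POJO types.
        System.out.println(serializer instanceof KryoSerializer); // prints: true
    }
}

This is also why getBinaryRowData pairs the RAW column with a RawValueDataSerializer wrapping the same generic serializer: the writer and any later reader must agree on the Kryo-backed serialization of MyObj.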

Example 12 with GenericTypeInfo

Use of org.apache.flink.api.java.typeutils.GenericTypeInfo in project flink by apache.

From class NestedRowDataTest, method testNestedRowDataWithOneSegment.

@Test
public void testNestedRowDataWithOneSegment() {
    BinaryRowData row = getBinaryRowData();
    GenericTypeInfo<MyObj> info = new GenericTypeInfo<>(MyObj.class);
    TypeSerializer<MyObj> genericSerializer = info.createSerializer(new ExecutionConfig());
    RowData nestedRow = row.getRow(0, 5);
    assertEquals(1, nestedRow.getInt(0));
    assertEquals(5L, nestedRow.getLong(1));
    assertEquals(StringData.fromString("12345678"), nestedRow.getString(2));
    assertTrue(nestedRow.isNullAt(3));
    assertEquals(new MyObj(15, 5), nestedRow.<MyObj>getRawValue(4).toObject(genericSerializer));
}
Also used: BinaryRowData (org.apache.flink.table.data.binary.BinaryRowData), NestedRowData (org.apache.flink.table.data.binary.NestedRowData), ExecutionConfig (org.apache.flink.api.common.ExecutionConfig), MyObj (org.apache.flink.table.data.util.DataFormatTestUtil.MyObj), GenericTypeInfo (org.apache.flink.api.java.typeutils.GenericTypeInfo), Test (org.junit.Test)
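
For orientation, the access pattern exercised by this test can be distilled as follows (a condensed sketch of the two methods above, not additional test code; getRow takes the field position and the arity of the nested row):

    BinaryRowData outer = getBinaryRowData();    // one field holding a 5-column nested row
    RowData nested = outer.getRow(0, 5);         // args: field position, nested row arity
    long id = nested.getLong(1);                 // typed getters mirror the LogicalType layout
    boolean missing = nested.isNullAt(3);        // check for null before typed access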

Example 13 with GenericTypeInfo

Use of org.apache.flink.api.java.typeutils.GenericTypeInfo in project flink by apache.

From class OuterJoinITCase, method testJoinWithAtomicType2.

@Test
public void testJoinWithAtomicType2() throws Exception {
    final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    DataSet<Integer> ds1 = env.fromElements(1, 2);
    DataSet<Tuple3<Integer, Long, String>> ds2 = CollectionDataSets.getSmall3TupleDataSet(env);
    DataSet<Tuple2<Integer, Tuple3<Integer, Long, String>>> joinDs = ds1
            .fullOuterJoin(ds2)
            .where("*")
            .equalTo(0)
            .with(new ProjectBothFunction<Integer, Tuple3<Integer, Long, String>>())
            .returns(new GenericTypeInfo(Tuple2.class));
    List<Tuple2<Integer, Tuple3<Integer, Long, String>>> result = joinDs.collect();
    String expected = "1,(1,1,Hi)\n" + "2,(2,2,Hello)\n" + "null,(3,2,Hello world)\n";
    compareResultAsTuples(result, expected);
}
Also used: ExecutionEnvironment (org.apache.flink.api.java.ExecutionEnvironment), Tuple2 (org.apache.flink.api.java.tuple.Tuple2), Tuple3 (org.apache.flink.api.java.tuple.Tuple3), GenericTypeInfo (org.apache.flink.api.java.typeutils.GenericTypeInfo), Test (org.junit.Test)
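
Two details carry this test: for an atomic type such as Integer the key expression "*" selects the whole value as the join key, and the returns(...) hint is required because erasure hides the generic output parameters of ProjectBothFunction; the raw GenericTypeInfo deliberately routes the entire result tuple through Kryo. A minimal sketch of the same atomic-key pattern on an inner join (hypothetical data, not part of OuterJoinITCase):

@Test
public void sketchAtomicKeyJoin() throws Exception {
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    DataSet<Integer> left = env.fromElements(1, 2, 3);
    DataSet<Tuple2<Integer, String>> right =
            env.fromElements(Tuple2.of(1, "a"), Tuple2.of(2, "b"));
    // "*" keys the atomic Integer on its full value; field 0 keys the tuple side.
    left.join(right).where("*").equalTo(0).print();
}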

Example 14 with GenericTypeInfo

Use of org.apache.flink.api.java.typeutils.GenericTypeInfo in project beam by apache.

From class FlinkStateInternalsTest, method createStateBackend.

public static KeyedStateBackend<ByteBuffer> createStateBackend() throws Exception {
    MemoryStateBackend backend = new MemoryStateBackend();
    AbstractKeyedStateBackend<ByteBuffer> keyedStateBackend = backend.createKeyedStateBackend(
            new DummyEnvironment("test", 1, 0),
            new JobID(),
            "test_op",
            new GenericTypeInfo<>(ByteBuffer.class).createSerializer(new ExecutionConfig()),
            2,
            new KeyGroupRange(0, 1),
            new KvStateRegistry().createTaskRegistry(new JobID(), new JobVertexID()),
            TtlTimeProvider.DEFAULT,
            null,
            Collections.emptyList(),
            new CloseableRegistry());
    changeKey(keyedStateBackend);
    return keyedStateBackend;
}
Also used: KvStateRegistry (org.apache.flink.runtime.query.KvStateRegistry), MemoryStateBackend (org.apache.flink.runtime.state.memory.MemoryStateBackend), JobVertexID (org.apache.flink.runtime.jobgraph.JobVertexID), KeyGroupRange (org.apache.flink.runtime.state.KeyGroupRange), DummyEnvironment (org.apache.flink.runtime.operators.testutils.DummyEnvironment), ExecutionConfig (org.apache.flink.api.common.ExecutionConfig), CloseableRegistry (org.apache.flink.core.fs.CloseableRegistry), ByteBuffer (java.nio.ByteBuffer), JobID (org.apache.flink.api.common.JobID), GenericTypeInfo (org.apache.flink.api.java.typeutils.GenericTypeInfo)
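
Here GenericTypeInfo<ByteBuffer> is simply a convenient way to obtain a key serializer for the backend. A hypothetical follow-up showing how such a backend is typically exercised (the state name and values are assumptions for illustration, not part of FlinkStateInternalsTest):

public static void sketchStateAccess() throws Exception {
    KeyedStateBackend<ByteBuffer> backend = createStateBackend();
    // State is scoped to the current key, so a key must be set first.
    backend.setCurrentKey(ByteBuffer.wrap(new byte[] { 1 }));
    ValueState<String> state = backend.getPartitionedState(
            VoidNamespace.INSTANCE,
            VoidNamespaceSerializer.INSTANCE,
            new ValueStateDescriptor<>("sketch-state", String.class));
    state.update("hello");
    // Switching to another key hides this value until the key is restored.
}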

Example 15 with GenericTypeInfo

Use of org.apache.flink.api.java.typeutils.GenericTypeInfo in project flink by apache.

From class Kafka010ITCase, method testTimestamps.

/**
 * Kafka 0.10 specific test ensuring that timestamps are properly written to and read from Kafka.
 */
@Test(timeout = 60000)
public void testTimestamps() throws Exception {
    final String topic = "tstopic";
    createTestTopic(topic, 3, 1);
    // ---------- Produce an event time stream into Kafka -------------------
    StreamExecutionEnvironment env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort);
    env.setParallelism(1);
    env.getConfig().setRestartStrategy(RestartStrategies.noRestart());
    env.getConfig().disableSysoutLogging();
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
    DataStream<Long> streamWithTimestamps = env.addSource(new SourceFunction<Long>() {

        boolean running = true;

        @Override
        public void run(SourceContext<Long> ctx) throws Exception {
            long i = 0;
            while (running) {
                ctx.collectWithTimestamp(i, i * 2);
                if (i++ == 1000L) {
                    running = false;
                }
            }
        }

        @Override
        public void cancel() {
            running = false;
        }
    });
    final TypeInformationSerializationSchema<Long> longSer =
            new TypeInformationSerializationSchema<>(TypeInfoParser.<Long>parse("Long"), env.getConfig());
    FlinkKafkaProducer010.FlinkKafkaProducer010Configuration prod =
            FlinkKafkaProducer010.writeToKafkaWithTimestamps(
                    streamWithTimestamps,
                    topic,
                    new KeyedSerializationSchemaWrapper<>(longSer),
                    standardProps,
                    new KafkaPartitioner<Long>() {

                        @Override
                        public int partition(Long next, byte[] serializedKey, byte[] serializedValue, int numPartitions) {
                            return (int) (next % 3);
                        }
                    });
    prod.setParallelism(3);
    prod.setWriteTimestampToKafka(true);
    env.execute("Produce some");
    // ---------- Consume stream from Kafka -------------------
    env = StreamExecutionEnvironment.createRemoteEnvironment("localhost", flinkPort);
    env.setParallelism(1);
    env.getConfig().setRestartStrategy(RestartStrategies.noRestart());
    env.getConfig().disableSysoutLogging();
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
    FlinkKafkaConsumer010<Long> kafkaSource = new FlinkKafkaConsumer010<>(topic, new LimitedLongDeserializer(), standardProps);
    kafkaSource.assignTimestampsAndWatermarks(new AssignerWithPunctuatedWatermarks<Long>() {

        @Nullable
        @Override
        public Watermark checkAndGetNextWatermark(Long lastElement, long extractedTimestamp) {
            if (lastElement % 10 == 0) {
                return new Watermark(lastElement);
            }
            return null;
        }

        @Override
        public long extractTimestamp(Long element, long previousElementTimestamp) {
            return previousElementTimestamp;
        }
    });
    DataStream<Long> stream = env.addSource(kafkaSource);
    GenericTypeInfo<Object> objectTypeInfo = new GenericTypeInfo<>(Object.class);
    stream.transform("timestamp validating operator", objectTypeInfo, new TimestampValidatingOperator()).setParallelism(1);
    env.execute("Consume again");
    deleteTestTopic(topic);
}
Also used: IOException (java.io.IOException), GenericTypeInfo (org.apache.flink.api.java.typeutils.GenericTypeInfo), TypeInformationSerializationSchema (org.apache.flink.streaming.util.serialization.TypeInformationSerializationSchema), StreamExecutionEnvironment (org.apache.flink.streaming.api.environment.StreamExecutionEnvironment), Watermark (org.apache.flink.streaming.api.watermark.Watermark), Nullable (javax.annotation.Nullable), Test (org.junit.Test)
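
The TimestampValidatingOperator and LimitedLongDeserializer referenced above are defined elsewhere in Kafka010ITCase; GenericTypeInfo<Object> merely supplies a throwaway output type, since the operator validates rather than transforms. A minimal sketch of what such a validating operator can look like (an assumption about its shape, not the actual class):

private static class TimestampCheckingOperator extends AbstractStreamOperator<Object>
        implements OneInputStreamOperator<Long, Object> {

    @Override
    public void processElement(StreamRecord<Long> element) throws Exception {
        // The producer wrote element i with timestamp i * 2, so the consumer
        // can validate the Kafka-carried timestamp arithmetically.
        if (element.getTimestamp() != element.getValue() * 2) {
            throw new RuntimeException("Timestamp mismatch: expected "
                    + (element.getValue() * 2) + " but got " + element.getTimestamp());
        }
    }
}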

Aggregations

GenericTypeInfo (org.apache.flink.api.java.typeutils.GenericTypeInfo): 26
Test (org.junit.Test): 18
ValueStateDescriptor (org.apache.flink.api.common.state.ValueStateDescriptor): 10
ExecutionConfig (org.apache.flink.api.common.ExecutionConfig): 9
KryoSerializer (org.apache.flink.api.java.typeutils.runtime.kryo.KryoSerializer): 8
BlockerCheckpointStreamFactory (org.apache.flink.runtime.util.BlockerCheckpointStreamFactory): 8
IOException (java.io.IOException): 6
BinaryRowData (org.apache.flink.table.data.binary.BinaryRowData): 5
MyObj (org.apache.flink.table.data.util.DataFormatTestUtil.MyObj): 5
CancellationException (java.util.concurrent.CancellationException): 4
JobID (org.apache.flink.api.common.JobID): 4
StreamExecutionEnvironment (org.apache.flink.streaming.api.environment.StreamExecutionEnvironment): 4
StateMigrationException (org.apache.flink.util.StateMigrationException): 4
ExpectedException (org.junit.rules.ExpectedException): 4
ByteBuffer (java.nio.ByteBuffer): 3
Tuple2 (org.apache.flink.api.java.tuple.Tuple2): 3
JobVertexID (org.apache.flink.runtime.jobgraph.JobVertexID): 3
DummyEnvironment (org.apache.flink.runtime.operators.testutils.DummyEnvironment): 3
KvStateRegistry (org.apache.flink.runtime.query.KvStateRegistry): 3
MemoryStateBackend (org.apache.flink.runtime.state.memory.MemoryStateBackend): 3