Use of org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer in project flink by apache.
Example: class GlueSchemaRegistryJsonKinesisITCase, method createSource.
private FlinkKinesisConsumer<JsonDataWithSchema> createSource() {
    Properties properties = KINESALITE.getContainerProperties();
    properties.setProperty(
            STREAM_INITIAL_POSITION,
            ConsumerConfigConstants.InitialPosition.TRIM_HORIZON.name());
    return new FlinkKinesisConsumer<>(
            INPUT_STREAM,
            new GlueSchemaRegistryJsonDeserializationSchema<>(
                    JsonDataWithSchema.class, INPUT_STREAM, getConfigs()),
            properties);
}
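For context, the container properties above are plain java.util.Properties. The following is a minimal sketch of what such a consumer configuration looks like once the constants are resolved; the literal key strings ("aws.region", "aws.endpoint", "flink.stream.initpos") are assumed values of the ConsumerConfigConstants fields and should be verified against your connector version:

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    /** Builds a consumer configuration like the one used by the IT case above. */
    public static Properties buildConsumerConfig() {
        Properties props = new Properties();
        // Assumed literal values behind ConsumerConfigConstants.AWS_REGION and
        // AWS_ENDPOINT; check your connector version before relying on them.
        props.setProperty("aws.region", "us-east-1");
        props.setProperty("aws.endpoint", "https://localhost:4567");
        // Assumed value of STREAM_INITIAL_POSITION: start from the oldest
        // available record, as the test above does with TRIM_HORIZON.
        props.setProperty("flink.stream.initpos", "TRIM_HORIZON");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildConsumerConfig();
        System.out.println(props.getProperty("flink.stream.initpos"));
    }
}
```

In the real test these properties come from the Kinesalite test container, so only the initial position needs to be set explicitly.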
Use of org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer in project flink by apache.
Example: class KinesisExample, method main.
public static void main(String[] args) throws Exception {
    // parse input arguments
    final ParameterTool parameterTool = ParameterTool.fromArgs(args);
    StreamExecutionEnvironment env = KafkaExampleUtil.prepareExecutionEnv(parameterTool);

    String inputStream = parameterTool.getRequired("input-stream");
    String outputStream = parameterTool.getRequired("output-stream");

    FlinkKinesisConsumer<KafkaEvent> consumer =
            new FlinkKinesisConsumer<>(
                    inputStream, new KafkaEventSchema(), parameterTool.getProperties());
    consumer.setPeriodicWatermarkAssigner(new CustomWatermarkExtractor());

    Properties producerProperties = new Properties(parameterTool.getProperties());
    // the producer needs a region even when an endpoint URL is specified
    producerProperties.putIfAbsent(ConsumerConfigConstants.AWS_REGION, "us-east-1");
    // the test driver does not deaggregate
    producerProperties.putIfAbsent("AggregationEnabled", String.valueOf(false));

    // the KPL does not recognize the endpoint URL, so split it into host and port
    String kinesisUrl = producerProperties.getProperty(ConsumerConfigConstants.AWS_ENDPOINT);
    if (kinesisUrl != null) {
        URL url = new URL(kinesisUrl);
        producerProperties.put("KinesisEndpoint", url.getHost());
        producerProperties.put("KinesisPort", Integer.toString(url.getPort()));
        producerProperties.put("VerifyCertificate", "false");
    }

    FlinkKinesisProducer<KafkaEvent> producer =
            new FlinkKinesisProducer<>(new KafkaEventSchema(), producerProperties);
    producer.setDefaultStream(outputStream);
    producer.setDefaultPartition("fakePartition");

    DataStream<KafkaEvent> input =
            env.addSource(consumer).keyBy("word").map(new RollingAdditionMapper());
    input.addSink(producer);

    env.execute();
}
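The endpoint rewrite in the middle of main can be isolated: the KPL expects host and port as separate properties, so the example splits the consumer's endpoint URL with java.net.URL. A self-contained sketch of that step (the KinesisEndpoint/KinesisPort/VerifyCertificate property names are taken directly from the snippet above; KplEndpointSplitter is a hypothetical helper name):

```java
import java.net.MalformedURLException;
import java.net.URL;
import java.util.Properties;

public class KplEndpointSplitter {
    /** Splits an endpoint URL into the host/port properties the KPL expects. */
    public static Properties toKplProperties(String kinesisUrl) throws MalformedURLException {
        Properties props = new Properties();
        URL url = new URL(kinesisUrl);
        props.put("KinesisEndpoint", url.getHost());
        // Note: URL.getPort() returns -1 when the URL has no explicit port.
        props.put("KinesisPort", Integer.toString(url.getPort()));
        // Local test endpoints (e.g. Kinesalite) use self-signed certificates.
        props.put("VerifyCertificate", "false");
        return props;
    }

    public static void main(String[] args) throws MalformedURLException {
        Properties p = toKplProperties("https://localhost:4567");
        // prints localhost:4567
        System.out.println(p.getProperty("KinesisEndpoint") + ":" + p.getProperty("KinesisPort"));
    }
}
```

One caveat worth knowing: because the URL is read from producerProperties, which was built from the consumer's configuration, consumer and producer here must target the same endpoint.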
Use of org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer in project flink by apache.
Example: class GlueSchemaRegistryAvroKinesisITCase, method createSource.
private FlinkKinesisConsumer<GenericRecord> createSource() throws Exception {
    Properties properties = KINESALITE.getContainerProperties();
    properties.setProperty(
            STREAM_INITIAL_POSITION,
            ConsumerConfigConstants.InitialPosition.TRIM_HORIZON.name());
    return new FlinkKinesisConsumer<>(
            INPUT_STREAM,
            GlueSchemaRegistryAvroDeserializationSchema.forGeneric(getSchema(), getConfigs()),
            properties);
}
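Both IT cases store the initial position by calling name() on an enum constant, which the consumer later parses back. That round trip is plain Java; a sketch with a stand-in enum (InitialPosition here is a local illustration, not the Flink class, and the "flink.stream.initpos" key is an assumed constant value):

```java
import java.util.Properties;

public class InitialPositionSketch {
    // Stand-in for ConsumerConfigConstants.InitialPosition; illustration only.
    enum InitialPosition { LATEST, TRIM_HORIZON, AT_TIMESTAMP }

    public static void main(String[] args) {
        Properties props = new Properties();
        // Enum.name() yields the constant's exact identifier, e.g. "TRIM_HORIZON".
        props.setProperty("flink.stream.initpos", InitialPosition.TRIM_HORIZON.name());
        // The consuming side can recover the enum with valueOf on the stored string.
        InitialPosition pos =
                InitialPosition.valueOf(props.getProperty("flink.stream.initpos"));
        System.out.println(pos); // prints TRIM_HORIZON
    }
}
```

Using name() rather than toString() keeps the stored string stable even if toString() is ever overridden, which is why it is the safer choice for configuration values.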