
Example 81 with Topology

Use of org.apache.kafka.streams.Topology in project brave by openzipkin.

From class ITKafkaStreamsTracing, method should_create_spans_from_stream_with_tracing_mapValues_withKey:

@Test
public void should_create_spans_from_stream_with_tracing_mapValues_withKey() {
    String inputTopic = testName.getMethodName() + "-input";
    String outputTopic = testName.getMethodName() + "-output";
    StreamsBuilder builder = new StreamsBuilder();
    builder.stream(inputTopic, Consumed.with(Serdes.String(), Serdes.String()))
        .transformValues(kafkaStreamsTracing.mapValues("mapValue-1", (key, value) -> {
            try {
                Thread.sleep(100L);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            return value;
        }))
        .to(outputTopic, Produced.with(Serdes.String(), Serdes.String()));
    Topology topology = builder.build();
    KafkaStreams streams = buildKafkaStreams(topology);
    send(new ProducerRecord<>(inputTopic, TEST_KEY, TEST_VALUE));
    waitForStreamToRun(streams);
    MutableSpan spanInput = testSpanHandler.takeRemoteSpan(CONSUMER);
    assertThat(spanInput.tags()).containsEntry("kafka.topic", inputTopic);
    MutableSpan spanProcessor = testSpanHandler.takeLocalSpan();
    assertChildOf(spanProcessor, spanInput);
    MutableSpan spanOutput = testSpanHandler.takeRemoteSpan(PRODUCER);
    assertThat(spanOutput.tags()).containsEntry("kafka.topic", outputTopic);
    assertChildOf(spanOutput, spanProcessor);
    streams.close();
    streams.cleanUp();
}
Also used : StreamsBuilder(org.apache.kafka.streams.StreamsBuilder) KafkaStreams(org.apache.kafka.streams.KafkaStreams) MutableSpan(brave.handler.MutableSpan) Topology(org.apache.kafka.streams.Topology) Test(org.junit.Test)

Example 82 with Topology

Use of org.apache.kafka.streams.Topology in project brave by openzipkin.

From class ITKafkaStreamsTracing, method should_create_spans_from_stream_with_tracing_mark:

@Test
public void should_create_spans_from_stream_with_tracing_mark() {
    String inputTopic = testName.getMethodName() + "-input";
    String outputTopic = testName.getMethodName() + "-output";
    StreamsBuilder builder = new StreamsBuilder();
    builder.stream(inputTopic, Consumed.with(Serdes.String(), Serdes.String()))
        .transformValues(kafkaStreamsTracing.mark("mark-1"))
        .to(outputTopic, Produced.with(Serdes.String(), Serdes.String()));
    Topology topology = builder.build();
    KafkaStreams streams = buildKafkaStreams(topology);
    send(new ProducerRecord<>(inputTopic, TEST_KEY, TEST_VALUE));
    waitForStreamToRun(streams);
    MutableSpan spanInput = testSpanHandler.takeRemoteSpan(CONSUMER);
    assertThat(spanInput.tags()).containsEntry("kafka.topic", inputTopic);
    MutableSpan spanProcessor = testSpanHandler.takeLocalSpan();
    assertChildOf(spanProcessor, spanInput);
    MutableSpan spanOutput = testSpanHandler.takeRemoteSpan(PRODUCER);
    assertThat(spanOutput.tags()).containsEntry("kafka.topic", outputTopic);
    assertChildOf(spanOutput, spanProcessor);
    streams.close();
    streams.cleanUp();
}
Also used : StreamsBuilder(org.apache.kafka.streams.StreamsBuilder) KafkaStreams(org.apache.kafka.streams.KafkaStreams) MutableSpan(brave.handler.MutableSpan) Topology(org.apache.kafka.streams.Topology) Test(org.junit.Test)

Example 83 with Topology

Use of org.apache.kafka.streams.Topology in project brave by openzipkin.

From class ITKafkaStreamsTracing, method should_create_spans_from_stream_with_tracing_mark_as_not_filtered_predicate_true:

@Test
public void should_create_spans_from_stream_with_tracing_mark_as_not_filtered_predicate_true() {
    String inputTopic = testName.getMethodName() + "-input";
    String outputTopic = testName.getMethodName() + "-output";
    StreamsBuilder builder = new StreamsBuilder();
    builder.stream(inputTopic, Consumed.with(Serdes.String(), Serdes.String()))
        .transformValues(kafkaStreamsTracing.markAsNotFiltered("filterNot-1", (key, value) -> true))
        .filterNot((k, v) -> Objects.isNull(v))
        .to(outputTopic, Produced.with(Serdes.String(), Serdes.String()));
    Topology topology = builder.build();
    KafkaStreams streams = buildKafkaStreams(topology);
    send(new ProducerRecord<>(inputTopic, TEST_KEY, TEST_VALUE));
    waitForStreamToRun(streams);
    MutableSpan spanInput = testSpanHandler.takeRemoteSpan(CONSUMER);
    assertThat(spanInput.tags()).containsEntry("kafka.topic", inputTopic);
    MutableSpan spanProcessor = testSpanHandler.takeLocalSpan();
    assertChildOf(spanProcessor, spanInput);
    assertThat(spanProcessor.tags()).containsEntry(KAFKA_STREAMS_FILTERED_TAG, "true");
    // the predicate returned true, so the record is marked as filtered and dropped; no producer span is expected
    streams.close();
    streams.cleanUp();
}
Also used : StreamsBuilder(org.apache.kafka.streams.StreamsBuilder) StreamsConfig(org.apache.kafka.streams.StreamsConfig) TEST_KEY(brave.kafka.streams.KafkaStreamsTracingTest.TEST_KEY) Arrays(java.util.Arrays) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) ValueTransformerSupplier(org.apache.kafka.streams.kstream.ValueTransformerSupplier) Produced(org.apache.kafka.streams.kstream.Produced) Assertions.assertThat(org.assertj.core.api.Assertions.assertThat) ProcessorSupplier(org.apache.kafka.streams.processor.ProcessorSupplier) ArrayList(java.util.ArrayList) TransformerSupplier(org.apache.kafka.streams.kstream.TransformerSupplier) KafkaJunitRule(com.github.charithe.kafka.KafkaJunitRule) After(org.junit.After) Serdes(org.apache.kafka.common.serialization.Serdes) ValueTransformer(org.apache.kafka.streams.kstream.ValueTransformer) KafkaTracing(brave.kafka.clients.KafkaTracing) ClassRule(org.junit.ClassRule) PRODUCER(brave.Span.Kind.PRODUCER) Consumer(org.apache.kafka.clients.consumer.Consumer) CommonClientConfigs(org.apache.kafka.clients.CommonClientConfigs) TopicPartition(org.apache.kafka.common.TopicPartition) Properties(java.util.Properties) Producer(org.apache.kafka.clients.producer.Producer) Consumed(org.apache.kafka.streams.kstream.Consumed) Transformer(org.apache.kafka.streams.kstream.Transformer) KeyValue(org.apache.kafka.streams.KeyValue) ConsumerConfig(org.apache.kafka.clients.consumer.ConsumerConfig) Test(org.junit.Test) MessagingTracing(brave.messaging.MessagingTracing) TEST_VALUE(brave.kafka.streams.KafkaStreamsTracingTest.TEST_VALUE) TraceContext(brave.propagation.TraceContext) ValueTransformerWithKeySupplier(org.apache.kafka.streams.kstream.ValueTransformerWithKeySupplier) Assertions.entry(org.assertj.core.api.Assertions.entry) FileNotFoundException(java.io.FileNotFoundException) EphemeralKafkaBroker(com.github.charithe.kafka.EphemeralKafkaBroker) Objects(java.util.Objects) ProcessorContext(org.apache.kafka.streams.processor.ProcessorContext) List(java.util.List) MutableSpan(brave.handler.MutableSpan) KAFKA_STREAMS_FILTERED_TAG(brave.kafka.streams.KafkaStreamsTags.KAFKA_STREAMS_FILTERED_TAG) ValueTransformerWithKey(org.apache.kafka.streams.kstream.ValueTransformerWithKey) KafkaStreams(org.apache.kafka.streams.KafkaStreams) AbstractProcessor(org.apache.kafka.streams.processor.AbstractProcessor) CONSUMER(brave.Span.Kind.CONSUMER) Topology(org.apache.kafka.streams.Topology) KafkaConsumer(org.apache.kafka.clients.consumer.KafkaConsumer)

Example 84 with Topology

Use of org.apache.kafka.streams.Topology in project brave by openzipkin.

From class ITKafkaStreamsTracing, method should_create_one_span_from_stream_input_topic_whenSharingEnabled:

@Test
public void should_create_one_span_from_stream_input_topic_whenSharingEnabled() {
    String inputTopic = testName.getMethodName() + "-input";
    StreamsBuilder builder = new StreamsBuilder();
    builder.stream(inputTopic).foreach((k, v) -> {
    });
    Topology topology = builder.build();
    MessagingTracing messagingTracing = MessagingTracing.create(tracing);
    KafkaStreamsTracing kafkaStreamsTracing = KafkaStreamsTracing.newBuilder(messagingTracing).singleRootSpanOnReceiveBatch(true).build();
    KafkaStreams streams = kafkaStreamsTracing.kafkaStreams(topology, streamsProperties());
    send(new ProducerRecord<>(inputTopic, TEST_KEY, TEST_VALUE));
    send(new ProducerRecord<>(inputTopic, TEST_KEY, TEST_VALUE));
    send(new ProducerRecord<>(inputTopic, TEST_KEY, TEST_VALUE));
    waitForStreamToRun(streams);
    MutableSpan spanInput = testSpanHandler.takeRemoteSpan(CONSUMER);
    assertThat(spanInput.tags()).containsEntry("kafka.topic", inputTopic);
    streams.close();
    streams.cleanUp();
}
Also used : StreamsBuilder(org.apache.kafka.streams.StreamsBuilder) KafkaStreams(org.apache.kafka.streams.KafkaStreams) MutableSpan(brave.handler.MutableSpan) MessagingTracing(brave.messaging.MessagingTracing) Topology(org.apache.kafka.streams.Topology) Test(org.junit.Test)

Example 85 with Topology

Use of org.apache.kafka.streams.Topology in project apache-kafka-on-k8s by banzaicloud.

From class WordCountProcessorDemo, method main:

public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount-processor");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    // setting offset reset to earliest so that we can re-run the demo code with the same pre-loaded data
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    Topology topology = new Topology();
    topology.addSource("Source", "streams-plaintext-input");
    topology.addProcessor("Process", new MyProcessorSupplier(), "Source");
    topology.addStateStore(Stores.keyValueStoreBuilder(Stores.inMemoryKeyValueStore("Counts"), Serdes.String(), Serdes.Integer()), "Process");
    topology.addSink("Sink", "streams-wordcount-processor-output", "Process");
    final KafkaStreams streams = new KafkaStreams(topology, props);
    final CountDownLatch latch = new CountDownLatch(1);
    // attach shutdown handler to catch control-c
    Runtime.getRuntime().addShutdownHook(new Thread("streams-wordcount-shutdown-hook") {

        @Override
        public void run() {
            streams.close();
            latch.countDown();
        }
    });
    try {
        streams.start();
        latch.await();
    } catch (Throwable e) {
        System.exit(1);
    }
    System.exit(0);
}
Also used : KafkaStreams(org.apache.kafka.streams.KafkaStreams) Topology(org.apache.kafka.streams.Topology) Properties(java.util.Properties) CountDownLatch(java.util.concurrent.CountDownLatch)
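The demo above wires in a MyProcessorSupplier whose body is not shown. As a rough, Kafka-free sketch of the tokenize-and-count step such a word-count processor presumably applies to each record value (the class and method names below are illustrative, not taken from the demo):

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Illustrative only: the per-record word-count step a processor like
// MyProcessorSupplier's processor presumably performs, without Kafka APIs.
public class WordCountSketch {
    static Map<String, Integer> countWords(String line) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : line.toLowerCase(Locale.ROOT).split("\\W+")) {
            if (word.isEmpty()) {
                continue; // split can yield an empty leading token
            }
            counts.merge(word, 1, Integer::sum); // increment, starting at 1
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(countWords("all streams lead to Kafka kafka"));
    }
}
```

In the demo itself the running totals would instead live in the "Counts" state store registered on the topology, and results would be forwarded downstream to the sink topic.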

Aggregations

Topology (org.apache.kafka.streams.Topology): 127
Test (org.junit.Test): 106
StreamsBuilder (org.apache.kafka.streams.StreamsBuilder): 93
KafkaStreams (org.apache.kafka.streams.KafkaStreams): 53
TopologyTestDriver (org.apache.kafka.streams.TopologyTestDriver): 53
Properties (java.util.Properties): 47
StringSerializer (org.apache.kafka.common.serialization.StringSerializer): 46
KeyValue (org.apache.kafka.streams.KeyValue): 40
Serdes (org.apache.kafka.common.serialization.Serdes): 39
StreamsConfig (org.apache.kafka.streams.StreamsConfig): 33
List (java.util.List): 29
MutableSpan (brave.handler.MutableSpan): 28
Consumed (org.apache.kafka.streams.kstream.Consumed): 28
Produced (org.apache.kafka.streams.kstream.Produced): 26
Arrays (java.util.Arrays): 25
StringDeserializer (org.apache.kafka.common.serialization.StringDeserializer): 25
ArrayList (java.util.ArrayList): 23
ProcessorContext (org.apache.kafka.streams.processor.ProcessorContext): 23
Duration (java.time.Duration): 22
KStream (org.apache.kafka.streams.kstream.KStream): 22