Example 1 with MockProcessorContext

Use of org.apache.kafka.streams.processor.api.MockProcessorContext in project kafka by apache.

From the class KTableKTableInnerJoinTest, method shouldLogAndMeterSkippedRecordsDueToNullLeftKey: the test verifies that the inner-join processor skips a record with a null key and logs the skip.

@Test
public void shouldLogAndMeterSkippedRecordsDueToNullLeftKey() {
    final StreamsBuilder builder = new StreamsBuilder();
    @SuppressWarnings("unchecked")
    final Processor<String, Change<String>, String, Change<Object>> join = new KTableKTableInnerJoin<>(
            (KTableImpl<String, String, String>) builder.table("left", Consumed.with(Serdes.String(), Serdes.String())),
            (KTableImpl<String, String, String>) builder.table("right", Consumed.with(Serdes.String(), Serdes.String())),
            null).get();
    final MockProcessorContext<String, Change<Object>> context = new MockProcessorContext<>(props);
    context.setRecordMetadata("left", -1, -2);
    join.init(context);
    try (final LogCaptureAppender appender = LogCaptureAppender.createAndRegister(KTableKTableInnerJoin.class)) {
        join.process(new Record<>(null, new Change<>("new", "old"), 0));
        assertThat(appender.getMessages(), hasItem("Skipping record due to null key. topic=[left] partition=[-1] offset=[-2]"));
    }
}
Also used : StreamsBuilder(org.apache.kafka.streams.StreamsBuilder) LogCaptureAppender(org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender) MockProcessorContext(org.apache.kafka.streams.processor.api.MockProcessorContext) Test(org.junit.Test)
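The join tests above pass a props field that the snippets never define; it is initialized elsewhere in each test class (judging from the StreamsTestUtils import in Example 5, likely via a helper such as StreamsTestUtils.getStreamsConfig). A minimal, dependency-free sketch of such a configuration, using the literal keys behind StreamsConfig.APPLICATION_ID_CONFIG and StreamsConfig.BOOTSTRAP_SERVERS_CONFIG (the helper name and values here are assumptions, not the actual test setup):

```java
import java.util.Properties;

public class PropsSketch {
    // Hypothetical helper: the minimal configuration a MockProcessorContext
    // needs. The literal keys mirror StreamsConfig.APPLICATION_ID_CONFIG and
    // StreamsConfig.BOOTSTRAP_SERVERS_CONFIG; the broker is never contacted.
    static Properties minimalStreamsProps() {
        final Properties props = new Properties();
        props.setProperty("application.id", "mock-processor-test");
        props.setProperty("bootstrap.servers", "localhost:9092"); // dummy value
        return props;
    }

    public static void main(String[] args) {
        final Properties props = minimalStreamsProps();
        System.out.println(props.getProperty("application.id"));
    }
}
```

Because MockProcessorContext drives the processor directly, these two entries are typically all the configuration the tests need.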

Example 2 with MockProcessorContext

Use of org.apache.kafka.streams.processor.api.MockProcessorContext in project kafka by apache.

From the class KTableKTableOuterJoinTest, method shouldLogAndMeterSkippedRecordsDueToNullLeftKey: the outer-join variant of the same null-key check.

@Test
public void shouldLogAndMeterSkippedRecordsDueToNullLeftKey() {
    final StreamsBuilder builder = new StreamsBuilder();
    @SuppressWarnings("unchecked")
    final Processor<String, Change<String>, String, Change<Object>> join = new KTableKTableOuterJoin<>(
            (KTableImpl<String, String, String>) builder.table("left", Consumed.with(Serdes.String(), Serdes.String())),
            (KTableImpl<String, String, String>) builder.table("right", Consumed.with(Serdes.String(), Serdes.String())),
            null).get();
    final MockProcessorContext<String, Change<Object>> context = new MockProcessorContext<>(props);
    context.setRecordMetadata("left", -1, -2);
    join.init(context);
    try (final LogCaptureAppender appender = LogCaptureAppender.createAndRegister(KTableKTableOuterJoin.class)) {
        join.process(new Record<>(null, new Change<>("new", "old"), 0));
        assertThat(appender.getMessages(), hasItem("Skipping record due to null key. topic=[left] partition=[-1] offset=[-2]"));
    }
}
Also used : StreamsBuilder(org.apache.kafka.streams.StreamsBuilder) LogCaptureAppender(org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender) MockProcessorContext(org.apache.kafka.streams.processor.api.MockProcessorContext) Test(org.junit.Test)

Example 3 with MockProcessorContext

Use of org.apache.kafka.streams.processor.api.MockProcessorContext in project kafka by apache.

From the class WindowedWordCountProcessorTest, method shouldWorkWithPersistentStore: the test runs a windowed word-count processor against a persistent window store and verifies that counts are forwarded only when the punctuator fires.

@Test
public void shouldWorkWithPersistentStore() throws IOException {
    final File stateDir = TestUtils.tempDirectory();
    try {
        final MockProcessorContext<String, String> context = new MockProcessorContext<>(new Properties(), new TaskId(0, 0), stateDir);
        // Create, initialize, and register the state store.
        final WindowStore<String, Integer> store = Stores.windowStoreBuilder(
                Stores.persistentWindowStore("WindowedCounts", Duration.ofDays(24), Duration.ofMillis(100), false),
                Serdes.String(),
                Serdes.Integer())
                .withLoggingDisabled()
                .withCachingDisabled()
                .build();
        store.init(context.getStateStoreContext(), store);
        context.getStateStoreContext().register(store, null);
        // Create and initialize the processor under test
        final Processor<String, String, String, String> processor = new WindowedWordCountProcessorSupplier().get();
        processor.init(context);
        // send a record to the processor
        processor.process(new Record<>("key", "alpha beta gamma alpha", 101L));
        // send a record to the processor in a new window
        processor.process(new Record<>("key", "gamma delta", 221L));
        // note that the processor does not forward during process()
        assertThat(context.forwarded().isEmpty(), is(true));
        // now, we trigger the punctuator, which iterates over the state store and forwards the contents.
        context.scheduledPunctuators().get(0).getPunctuator().punctuate(1_000L);
        // finally, we can verify the output.
        final List<CapturedForward<? extends String, ? extends String>> capturedForwards = context.forwarded();
        final List<CapturedForward<? extends String, ? extends String>> expected = asList(
                new CapturedForward<>(new Record<>("[alpha@100/200]", "2", 1_000L)),
                new CapturedForward<>(new Record<>("[beta@100/200]", "1", 1_000L)),
                new CapturedForward<>(new Record<>("[delta@200/300]", "1", 1_000L)),
                new CapturedForward<>(new Record<>("[gamma@100/200]", "1", 1_000L)),
                new CapturedForward<>(new Record<>("[gamma@200/300]", "1", 1_000L)));
        assertThat(capturedForwards, is(expected));
        store.close();
    } finally {
        Utils.delete(stateDir);
    }
}
Also used : TaskId(org.apache.kafka.streams.processor.TaskId) Properties(java.util.Properties) MockProcessorContext(org.apache.kafka.streams.processor.api.MockProcessorContext) CapturedForward(org.apache.kafka.streams.processor.api.MockProcessorContext.CapturedForward) Record(org.apache.kafka.streams.processor.api.Record) File(java.io.File) Test(org.junit.jupiter.api.Test)
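The behaviour Example 3 asserts (process() only updates the state store, and nothing reaches downstream until the punctuator runs and flushes the store) can be sketched without any Kafka dependency. PunctuateSketch below is a hypothetical stand-in for the word-count processor, not the real WindowedWordCountProcessorSupplier, and it drops the windowing to keep the pattern visible:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of the pattern the test verifies: process() only counts
// into the store; punctuate() iterates the store and forwards every count.
public class PunctuateSketch {
    final Map<String, Integer> store = new LinkedHashMap<>();
    final List<String> forwarded = new ArrayList<>();

    void process(String value) {
        for (String word : value.split(" ")) {
            store.merge(word, 1, Integer::sum); // update state, forward nothing
        }
    }

    void punctuate() {
        // Like the test's punctuator: scan the store, emit each entry downstream.
        store.forEach((word, count) -> forwarded.add(word + "=" + count));
    }

    public static void main(String[] args) {
        PunctuateSketch p = new PunctuateSketch();
        p.process("alpha beta gamma alpha");
        System.out.println(p.forwarded.isEmpty()); // true: nothing forwarded yet
        p.punctuate();
        System.out.println(p.forwarded);           // [alpha=2, beta=1, gamma=1]
    }
}
```

MockProcessorContext supports exactly this style of test: context.forwarded() is checked to be empty after process(), then the captured punctuator is invoked by hand and the forwards are asserted.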

Example 4 with MockProcessorContext

Use of org.apache.kafka.streams.processor.api.MockProcessorContext in project kafka by apache.

From the class KTableKTableLeftJoinTest, method shouldLogAndMeterSkippedRecordsDueToNullLeftKey: the left-join variant of the same null-key check.

@Test
public void shouldLogAndMeterSkippedRecordsDueToNullLeftKey() {
    final StreamsBuilder builder = new StreamsBuilder();
    @SuppressWarnings("unchecked")
    final Processor<String, Change<String>, String, Change<Object>> join = new KTableKTableLeftJoin<>(
            (KTableImpl<String, String, String>) builder.table("left", Consumed.with(Serdes.String(), Serdes.String())),
            (KTableImpl<String, String, String>) builder.table("right", Consumed.with(Serdes.String(), Serdes.String())),
            null).get();
    final MockProcessorContext<String, Change<Object>> context = new MockProcessorContext<>(props);
    context.setRecordMetadata("left", -1, -2);
    join.init(context);
    try (final LogCaptureAppender appender = LogCaptureAppender.createAndRegister(KTableKTableLeftJoin.class)) {
        join.process(new Record<>(null, new Change<>("new", "old"), 0));
        assertThat(appender.getMessages(), hasItem("Skipping record due to null key. topic=[left] partition=[-1] offset=[-2]"));
    }
}
Also used : StreamsBuilder(org.apache.kafka.streams.StreamsBuilder) LogCaptureAppender(org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender) MockProcessorContext(org.apache.kafka.streams.processor.api.MockProcessorContext) Test(org.junit.Test)

Example 5 with MockProcessorContext

Use of org.apache.kafka.streams.processor.api.MockProcessorContext in project kafka by apache.

From the class KTableKTableRightJoinTest, method shouldLogAndMeterSkippedRecordsDueToNullLeftKeyWithBuiltInMetricsVersionLatest: the right-join variant of the null-key check, run with the latest built-in metrics version and asserting on WARN-level log events specifically.

@Test
public void shouldLogAndMeterSkippedRecordsDueToNullLeftKeyWithBuiltInMetricsVersionLatest() {
    final StreamsBuilder builder = new StreamsBuilder();
    @SuppressWarnings("unchecked")
    final Processor<String, Change<String>, String, Change<Object>> join = new KTableKTableRightJoin<>(
            (KTableImpl<String, String, String>) builder.table("left", Consumed.with(Serdes.String(), Serdes.String())),
            (KTableImpl<String, String, String>) builder.table("right", Consumed.with(Serdes.String(), Serdes.String())),
            null).get();
    props.setProperty(StreamsConfig.BUILT_IN_METRICS_VERSION_CONFIG, StreamsConfig.METRICS_LATEST);
    final MockProcessorContext<String, Change<Object>> context = new MockProcessorContext<>(props);
    context.setRecordMetadata("left", -1, -2);
    join.init(context);
    try (final LogCaptureAppender appender = LogCaptureAppender.createAndRegister(KTableKTableRightJoin.class)) {
        join.process(new Record<>(null, new Change<>("new", "old"), 0));
        assertThat(
                appender.getEvents().stream()
                        .filter(e -> e.getLevel().equals("WARN"))
                        .map(Event::getMessage)
                        .collect(Collectors.toList()),
                hasItem("Skipping record due to null key. topic=[left] partition=[-1] offset=[-2]"));
    }
}
Also used : StreamsBuilder(org.apache.kafka.streams.StreamsBuilder) StreamsConfig(org.apache.kafka.streams.StreamsConfig) CoreMatchers.hasItem(org.hamcrest.CoreMatchers.hasItem) Event(org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender.Event) Properties(java.util.Properties) Consumed(org.apache.kafka.streams.kstream.Consumed) Test(org.junit.Test) MockProcessorContext(org.apache.kafka.streams.processor.api.MockProcessorContext) Collectors(java.util.stream.Collectors) LogCaptureAppender(org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender) Serdes(org.apache.kafka.common.serialization.Serdes) Record(org.apache.kafka.streams.processor.api.Record) Processor(org.apache.kafka.streams.processor.api.Processor) StreamsTestUtils(org.apache.kafka.test.StreamsTestUtils) MatcherAssert.assertThat(org.hamcrest.MatcherAssert.assertThat)

Aggregations

MockProcessorContext (org.apache.kafka.streams.processor.api.MockProcessorContext) 10
Record (org.apache.kafka.streams.processor.api.Record) 5
Test (org.junit.jupiter.api.Test) 5
Properties (java.util.Properties) 4
StreamsBuilder (org.apache.kafka.streams.StreamsBuilder) 4
TaskId (org.apache.kafka.streams.processor.TaskId) 4
LogCaptureAppender (org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender) 4
Test (org.junit.Test) 4
File (java.io.File) 3
Processor (org.apache.kafka.streams.processor.api.Processor) 3
Utils.mkProperties (org.apache.kafka.common.utils.Utils.mkProperties) 2
Punctuator (org.apache.kafka.streams.processor.Punctuator) 2
StateStore (org.apache.kafka.streams.processor.StateStore) 2
CapturedForward (org.apache.kafka.streams.processor.api.MockProcessorContext.CapturedForward) 2
ProcessorContext (org.apache.kafka.streams.processor.api.ProcessorContext) 2
IOException (java.io.IOException) 1
Duration (java.time.Duration) 1
Arrays.asList (java.util.Arrays.asList) 1
Collections.singletonList (java.util.Collections.singletonList) 1
List (java.util.List) 1