
Example 6 with Long

Use of org.apache.kafka.common.serialization.Serdes.Long in project kafka by apache.

The class KTableSuppressProcessorTest, method zeroTimeLimitShouldImmediatelyEmit:

@Test
public void zeroTimeLimitShouldImmediatelyEmit() {
    final Harness<String, Long> harness = new Harness<>(untilTimeLimit(ZERO, unbounded()), String(), Long());
    final MockInternalNewProcessorContext<String, Change<Long>> context = harness.context;
    final long timestamp = ARBITRARY_LONG;
    context.setRecordMetadata("", 0, 0L);
    context.setTimestamp(timestamp);
    final String key = "hey";
    final Change<Long> value = ARBITRARY_CHANGE;
    harness.processor.process(new Record<>(key, value, timestamp));
    assertThat(context.forwarded(), hasSize(1));
    final MockProcessorContext.CapturedForward capturedForward = context.forwarded().get(0);
    assertThat(capturedForward.record(), is(new Record<>(key, value, timestamp)));
}
Also used: Long (org.apache.kafka.common.serialization.Serdes.Long), Record (org.apache.kafka.streams.processor.api.Record), String (org.apache.kafka.common.serialization.Serdes.String), CoreMatchers.containsString (org.hamcrest.CoreMatchers.containsString), Change (org.apache.kafka.streams.kstream.internals.Change), MockProcessorContext (org.apache.kafka.streams.processor.api.MockProcessorContext), Test (org.junit.Test)
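For orientation, the configuration exercised by this test maps to a zero time limit on the public Suppressed API, which forwards every update immediately. The following is a minimal topology sketch; the topic names ("input-topic", "output-topic") and the wrapping class are illustrative assumptions, not part of the test above.

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Suppressed;

public class ZeroTimeLimitSuppressSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        builder.table("input-topic", Consumed.with(Serdes.String(), Serdes.Long()))
            // a zero time limit holds nothing back: each update is forwarded as soon as it arrives
            .suppress(Suppressed.untilTimeLimit(Duration.ZERO, Suppressed.BufferConfig.unbounded()))
            .toStream()
            .to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));
        // builder.build() would then be passed to a KafkaStreams instance as usual
    }
}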

Example 7 with Long

Use of org.apache.kafka.common.serialization.Serdes.Long in project kafka by apache.

The class KTableSuppressProcessorTest, method intermediateSuppressionShouldBufferAndEmitLater:

@Test
public void intermediateSuppressionShouldBufferAndEmitLater() {
    final Harness<String, Long> harness = new Harness<>(untilTimeLimit(ofMillis(1), unbounded()), String(), Long());
    final MockInternalNewProcessorContext<String, Change<Long>> context = harness.context;
    final long timestamp = 0L;
    context.setRecordMetadata("topic", 0, 0);
    context.setTimestamp(timestamp);
    final String key = "hey";
    final Change<Long> value = new Change<>(null, 1L);
    harness.processor.process(new Record<>(key, value, timestamp));
    assertThat(context.forwarded(), hasSize(0));
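    // a later record with a higher timestamp advances stream time past the 1 ms limit,
    // flushing the previously buffered record downstream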
    context.setRecordMetadata("topic", 0, 1);
    context.setTimestamp(1L);
    harness.processor.process(new Record<>("tick", new Change<>(null, null), 1L));
    assertThat(context.forwarded(), hasSize(1));
    final MockProcessorContext.CapturedForward capturedForward = context.forwarded().get(0);
    assertThat(capturedForward.record(), is(new Record<>(key, value, timestamp)));
}
Also used: Long (org.apache.kafka.common.serialization.Serdes.Long), Record (org.apache.kafka.streams.processor.api.Record), String (org.apache.kafka.common.serialization.Serdes.String), CoreMatchers.containsString (org.hamcrest.CoreMatchers.containsString), Change (org.apache.kafka.streams.kstream.internals.Change), MockProcessorContext (org.apache.kafka.streams.processor.api.MockProcessorContext), Test (org.junit.Test)
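On the public API, the same behaviour is obtained with a non-zero time limit: updates are buffered and released only once stream time advances past the limit. A minimal sketch, assuming hypothetical topic names and the 1 ms limit from the test:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Suppressed;

public class TimeLimitSuppressSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        builder.table("counts-input", Consumed.with(Serdes.String(), Serdes.Long()))
            // updates sit in an unbounded buffer; a buffered record is forwarded only after
            // stream time (driven by later records) has advanced at least 1 ms past its timestamp
            .suppress(Suppressed.untilTimeLimit(Duration.ofMillis(1), Suppressed.BufferConfig.unbounded()))
            .toStream()
            .to("counts-output", Produced.with(Serdes.String(), Serdes.Long()));
    }
}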

Example 8 with Long

Use of org.apache.kafka.common.serialization.Serdes.Long in project kafka by apache.

The class KTableSuppressProcessorTest, method suppressShouldNotDropTombstonesForKTable:

/**
 * It's SUPER NOT OK to drop tombstones for non-windowed streams, since we may have emitted some results for
 * the key before getting the tombstone (see the {@link SuppressedInternal} javadoc).
 */
@Test
public void suppressShouldNotDropTombstonesForKTable() {
    final Harness<String, Long> harness = new Harness<>(untilTimeLimit(ofMillis(0), maxRecords(0)), String(), Long());
    final MockInternalNewProcessorContext<String, Change<Long>> context = harness.context;
    final long timestamp = 100L;
    context.setRecordMetadata("", 0, 0L);
    context.setTimestamp(timestamp);
    final String key = "hey";
    final Change<Long> value = new Change<>(null, ARBITRARY_LONG);
    harness.processor.process(new Record<>(key, value, timestamp));
    assertThat(context.forwarded(), hasSize(1));
    final MockProcessorContext.CapturedForward capturedForward = context.forwarded().get(0);
    assertThat(capturedForward.record(), is(new Record<>(key, value, timestamp)));
}
Also used: Long (org.apache.kafka.common.serialization.Serdes.Long), Record (org.apache.kafka.streams.processor.api.Record), String (org.apache.kafka.common.serialization.Serdes.String), CoreMatchers.containsString (org.hamcrest.CoreMatchers.containsString), Change (org.apache.kafka.streams.kstream.internals.Change), MockProcessorContext (org.apache.kafka.streams.processor.api.MockProcessorContext), Test (org.junit.Test)
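At the topology level, the guarantee exercised here is that suppression on a plain (non-windowed) KTable never discards tombstones: a null-valued update still reaches downstream operators so they can retract earlier results. A minimal sketch with assumed topic names:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Suppressed;

public class KTableTombstoneSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        builder.table("totals-input", Consumed.with(Serdes.String(), Serdes.Long()))
            // an eager buffer of zero records forces immediate emission; even so, a delete
            // (null value) for a key is forwarded as a tombstone rather than being dropped
            .suppress(Suppressed.untilTimeLimit(Duration.ZERO, Suppressed.BufferConfig.maxRecords(0)))
            .toStream()
            .foreach((key, total) -> System.out.println(key + " -> " + total));
    }
}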

Example 9 with Long

Use of org.apache.kafka.common.serialization.Serdes.Long in project kafka by apache.

The class KTableSuppressProcessorTest, method suppressShouldShutDownWhenOverByteCapacity:

@Test
public void suppressShouldShutDownWhenOverByteCapacity() {
    final Harness<String, Long> harness = new Harness<>(untilTimeLimit(Duration.ofDays(100), maxBytes(60L).shutDownWhenFull()), String(), Long());
    final MockInternalNewProcessorContext<String, Change<Long>> context = harness.context;
    final long timestamp = 100L;
    context.setRecordMetadata("", 0, 0L);
    context.setTimestamp(timestamp);
    context.setCurrentNode(new ProcessorNode("testNode"));
    final String key = "hey";
    final Change<Long> value = new Change<>(null, ARBITRARY_LONG);
    harness.processor.process(new Record<>(key, value, timestamp));
    context.setRecordMetadata("", 0, 1L);
    context.setTimestamp(1L);
    try {
        harness.processor.process(new Record<>("dummyKey", value, timestamp));
        fail("expected an exception");
    } catch (final StreamsException e) {
        assertThat(e.getMessage(), containsString("buffer exceeded its max capacity"));
    }
}
Also used: ProcessorNode (org.apache.kafka.streams.processor.internals.ProcessorNode), StreamsException (org.apache.kafka.streams.errors.StreamsException), Long (org.apache.kafka.common.serialization.Serdes.Long), String (org.apache.kafka.common.serialization.Serdes.String), CoreMatchers.containsString (org.hamcrest.CoreMatchers.containsString), Change (org.apache.kafka.streams.kstream.internals.Change), Test (org.junit.Test)
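The strict buffer configuration used in this test is built with maxBytes(...).shutDownWhenFull(): once the suppression buffer would exceed the byte limit, the task fails with a StreamsException instead of emitting early. A minimal sketch; the topic names and the 60-byte limit are illustrative:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Suppressed;

public class StrictBufferSuppressSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        builder.table("metrics-input", Consumed.with(Serdes.String(), Serdes.Long()))
            // the buffer may hold at most 60 bytes of buffered keys and values; exceeding that
            // limit does not emit early but shuts the task down with a StreamsException
            .suppress(Suppressed.untilTimeLimit(Duration.ofDays(100),
                Suppressed.BufferConfig.maxBytes(60L).shutDownWhenFull()))
            .toStream()
            .to("metrics-output", Produced.with(Serdes.String(), Serdes.Long()));
    }
}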

Example 10 with Long

Use of org.apache.kafka.common.serialization.Serdes.Long in project kafka by apache.

The class KTableSuppressProcessorTest, method finalResultsShouldDropTombstonesForSessionWindows:

/**
 * It's desirable to drop tombstones for final-results windowed streams, since (as described in the
 * {@link SuppressedInternal} javadoc), they are unnecessary to emit.
 */
@Test
public void finalResultsShouldDropTombstonesForSessionWindows() {
    final Harness<Windowed<String>, Long> harness = new Harness<>(finalResults(ofMillis(0L)), sessionWindowedSerdeFrom(String.class), Long());
    final MockInternalNewProcessorContext<Windowed<String>, Change<Long>> context = harness.context;
    final long timestamp = 100L;
    context.setRecordMetadata("", 0, 0L);
    context.setTimestamp(timestamp);
    final Windowed<String> key = new Windowed<>("hey", new SessionWindow(0L, 0L));
    final Change<Long> value = new Change<>(null, ARBITRARY_LONG);
    harness.processor.process(new Record<>(key, value, timestamp));
    assertThat(context.forwarded(), hasSize(0));
}
Also used: Windowed (org.apache.kafka.streams.kstream.Windowed), Long (org.apache.kafka.common.serialization.Serdes.Long), String (org.apache.kafka.common.serialization.Serdes.String), CoreMatchers.containsString (org.hamcrest.CoreMatchers.containsString), Change (org.apache.kafka.streams.kstream.internals.Change), SessionWindow (org.apache.kafka.streams.kstream.internals.SessionWindow), Test (org.junit.Test)
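The finalResults(...) helper used by the test corresponds, on the public API, to Suppressed.untilWindowCloses(...), which emits exactly one final result per window and can therefore drop tombstones for already-expired windows. A minimal sketch; the topic name, the 5-minute inactivity gap, and the ofInactivityGapWithNoGrace method (Kafka Streams 3.x) are assumptions:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.SessionWindows;
import org.apache.kafka.streams.kstream.Suppressed;

public class FinalResultsSessionWindowSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()))
            .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
            // sessions close after 5 minutes of inactivity
            .windowedBy(SessionWindows.ofInactivityGapWithNoGrace(Duration.ofMinutes(5)))
            .count()
            // emit exactly one final count per session window; intermediate updates and
            // tombstones for already-expired windows are not forwarded downstream
            .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
            .toStream()
            .foreach((windowedKey, count) -> System.out.println(windowedKey + " -> " + count));
    }
}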

Aggregations

Long (org.apache.kafka.common.serialization.Serdes.Long): 15
String (org.apache.kafka.common.serialization.Serdes.String): 15
Change (org.apache.kafka.streams.kstream.internals.Change): 15
CoreMatchers.containsString (org.hamcrest.CoreMatchers.containsString): 15
Test (org.junit.Test): 15
MockProcessorContext (org.apache.kafka.streams.processor.api.MockProcessorContext): 11
Record (org.apache.kafka.streams.processor.api.Record): 11
Windowed (org.apache.kafka.streams.kstream.Windowed): 8
TimeWindow (org.apache.kafka.streams.kstream.internals.TimeWindow): 6
StreamsException (org.apache.kafka.streams.errors.StreamsException): 2
SessionWindow (org.apache.kafka.streams.kstream.internals.SessionWindow): 2
ProcessorNode (org.apache.kafka.streams.processor.internals.ProcessorNode): 2
Headers (org.apache.kafka.common.header.Headers): 1
RecordHeaders (org.apache.kafka.common.header.internals.RecordHeaders): 1