
Example 6 with Change

Use of org.apache.kafka.streams.kstream.internals.Change in project kafka by apache.

From the class KTableSuppressProcessorTest, the method suppressShouldEmitWhenOverByteCapacity.

@Test
public void suppressShouldEmitWhenOverByteCapacity() {
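    // The 100-day time limit never fires in this test; only the 60-byte capacity bound can trigger emission.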
    final Harness<String, Long> harness = new Harness<>(untilTimeLimit(Duration.ofDays(100), maxBytes(60L)), String(), Long());
    final MockInternalNewProcessorContext<String, Change<Long>> context = harness.context;
    final long timestamp = 100L;
    context.setRecordMetadata("", 0, 0L);
    context.setTimestamp(timestamp);
    final String key = "hey";
    final Change<Long> value = new Change<>(null, ARBITRARY_LONG);
    harness.processor.process(new Record<>(key, value, timestamp));
    context.setRecordMetadata("", 0, 1L);
    context.setTimestamp(timestamp + 1);
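    // The second record pushes the buffer past its 60-byte capacity, evicting and forwarding the oldest entry.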
    harness.processor.process(new Record<>("dummyKey", value, timestamp + 1));
    assertThat(context.forwarded(), hasSize(1));
    final MockProcessorContext.CapturedForward capturedForward = context.forwarded().get(0);
    assertThat(capturedForward.record(), is(new Record<>(key, value, timestamp)));
}
Also used: Long(org.apache.kafka.common.serialization.Serdes.Long) Record(org.apache.kafka.streams.processor.api.Record) String(org.apache.kafka.common.serialization.Serdes.String) CoreMatchers.containsString(org.hamcrest.CoreMatchers.containsString) Change(org.apache.kafka.streams.kstream.internals.Change) MockProcessorContext(org.apache.kafka.streams.processor.api.MockProcessorContext) Test(org.junit.Test)
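
For context, the eager byte bound that this test drives directly through the internal processor is normally configured through the public suppress() API. A minimal sketch follows; the topic names, serdes, and class name are illustrative assumptions rather than anything taken from the test:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.Suppressed.BufferConfig;

public class ByteBoundedSuppressionExample {
    public static Topology build() {
        final StreamsBuilder builder = new StreamsBuilder();
        final KTable<String, Long> table =
            builder.table("input", Consumed.with(Serdes.String(), Serdes.Long()));
        // Hold updates for up to 100 days, but emit early once the buffer exceeds
        // 60 bytes: the same bound the test above drives directly.
        table.suppress(Suppressed.untilTimeLimit(Duration.ofDays(100),
                BufferConfig.maxBytes(60L).emitEarlyWhenFull()))
             .toStream()
             .to("output");
        return builder.build();
    }
}

With an eager (emit-early) buffer config, hitting the byte cap evicts the oldest entries downstream instead of shutting the application down.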

Example 7 with Change

Use of org.apache.kafka.streams.kstream.internals.Change in project kafka by apache.

From the class KTableSuppressProcessorTest, the method zeroTimeLimitShouldImmediatelyEmit.

@Test
public void zeroTimeLimitShouldImmediatelyEmit() {
    final Harness<String, Long> harness = new Harness<>(untilTimeLimit(ZERO, unbounded()), String(), Long());
    final MockInternalNewProcessorContext<String, Change<Long>> context = harness.context;
    final long timestamp = ARBITRARY_LONG;
    context.setRecordMetadata("", 0, 0L);
    context.setTimestamp(timestamp);
    final String key = "hey";
    final Change<Long> value = ARBITRARY_CHANGE;
    harness.processor.process(new Record<>(key, value, timestamp));
    assertThat(context.forwarded(), hasSize(1));
    final MockProcessorContext.CapturedForward capturedForward = context.forwarded().get(0);
    assertThat(capturedForward.record(), is(new Record<>(key, value, timestamp)));
}
Also used: Long(org.apache.kafka.common.serialization.Serdes.Long) Record(org.apache.kafka.streams.processor.api.Record) String(org.apache.kafka.common.serialization.Serdes.String) CoreMatchers.containsString(org.hamcrest.CoreMatchers.containsString) Change(org.apache.kafka.streams.kstream.internals.Change) MockProcessorContext(org.apache.kafka.streams.processor.api.MockProcessorContext) Test(org.junit.Test)
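
Seen through the public API, this degenerate configuration is the one-liner below; with a ZERO time limit nothing is ever held back, so each update is forwarded as soon as it is processed (table is the illustrative KTable from the sketch under Example 6):

    // Effectively a pass-through: a zero time limit means no record ever waits in the buffer.
    table.suppress(Suppressed.untilTimeLimit(Duration.ZERO, BufferConfig.unbounded()));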

Example 8 with Change

Use of org.apache.kafka.streams.kstream.internals.Change in project kafka by apache.

From the class KTableSuppressProcessorTest, the method intermediateSuppressionShouldBufferAndEmitLater.

@Test
public void intermediateSuppressionShouldBufferAndEmitLater() {
    final Harness<String, Long> harness = new Harness<>(untilTimeLimit(ofMillis(1), unbounded()), String(), Long());
    final MockInternalNewProcessorContext<String, Change<Long>> context = harness.context;
    final long timestamp = 0L;
    context.setRecordMetadata("topic", 0, 0);
    context.setTimestamp(timestamp);
    final String key = "hey";
    final Change<Long> value = new Change<>(null, 1L);
    harness.processor.process(new Record<>(key, value, timestamp));
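    // Stream time has not advanced past the 1 ms limit, so the record stays buffered and nothing is forwarded.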
    assertThat(context.forwarded(), hasSize(0));
    context.setRecordMetadata("topic", 0, 1);
    context.setTimestamp(1L);
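    // Processing a later record advances stream time to 1 ms, expiring the entry buffered at time 0.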
    harness.processor.process(new Record<>("tick", new Change<>(null, null), 1L));
    assertThat(context.forwarded(), hasSize(1));
    final MockProcessorContext.CapturedForward capturedForward = context.forwarded().get(0);
    assertThat(capturedForward.record(), is(new Record<>(key, value, timestamp)));
}
Also used: Long(org.apache.kafka.common.serialization.Serdes.Long) Record(org.apache.kafka.streams.processor.api.Record) String(org.apache.kafka.common.serialization.Serdes.String) CoreMatchers.containsString(org.hamcrest.CoreMatchers.containsString) Change(org.apache.kafka.streams.kstream.internals.Change) MockProcessorContext(org.apache.kafka.streams.processor.api.MockProcessorContext) Test(org.junit.Test)
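
The same buffer-then-flush behavior can be observed end to end with TopologyTestDriver. Note that emission is driven by stream time carried in record timestamps, not by wall-clock time. The sketch below assumes a topology that applies suppress(untilTimeLimit(ofMillis(1), unbounded())) between topics named "input" and "output"; the topic names, serdes, and the topology and props built in setup are all assumptions:

@Test
public void shouldFlushWhenStreamTimeAdvances() {
    try (final TopologyTestDriver driver = new TopologyTestDriver(topology, props)) {
        final TestInputTopic<String, Long> input =
            driver.createInputTopic("input", new StringSerializer(), new LongSerializer());
        final TestOutputTopic<String, Long> output =
            driver.createOutputTopic("output", new StringDeserializer(), new LongDeserializer());
        // Buffered: stream time is still 0 ms, so the 1 ms limit has not elapsed.
        input.pipeInput("hey", 1L, 0L);
        assertThat(output.isEmpty(), is(true));
        // Stream time jumps to 2 ms, past the 1 ms limit, flushing the buffered record.
        input.pipeInput("tick", 2L, 2L);
        assertThat(output.readKeyValue(), is(new KeyValue<>("hey", 1L)));
    }
}
Also used: TopologyTestDriver(org.apache.kafka.streams.TopologyTestDriver) TestInputTopic(org.apache.kafka.streams.TestInputTopic) TestOutputTopic(org.apache.kafka.streams.TestOutputTopic) KeyValue(org.apache.kafka.streams.KeyValue) StringSerializer(org.apache.kafka.common.serialization.StringSerializer) LongSerializer(org.apache.kafka.common.serialization.LongSerializer) StringDeserializer(org.apache.kafka.common.serialization.StringDeserializer) LongDeserializer(org.apache.kafka.common.serialization.LongDeserializer)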

Example 9 with Change

Use of org.apache.kafka.streams.kstream.internals.Change in project kafka by apache.

From the class KTableSuppressProcessorTest, the method suppressShouldNotDropTombstonesForKTable.

/**
 * It's SUPER NOT OK to drop tombstones for non-windowed streams, since we may have emitted some results for
 * the key before getting the tombstone (see the {@link SuppressedInternal} javadoc).
 */
@Test
public void suppressShouldNotDropTombstonesForKTable() {
    final Harness<String, Long> harness = new Harness<>(untilTimeLimit(ofMillis(0), maxRecords(0)), String(), Long());
    final MockInternalNewProcessorContext<String, Change<Long>> context = harness.context;
    final long timestamp = 100L;
    context.setRecordMetadata("", 0, 0L);
    context.setTimestamp(timestamp);
    final String key = "hey";
    final Change<Long> value = new Change<>(null, ARBITRARY_LONG);
    harness.processor.process(new Record<>(key, value, timestamp));
    assertThat(context.forwarded(), hasSize(1));
    final MockProcessorContext.CapturedForward capturedForward = context.forwarded().get(0);
    assertThat(capturedForward.record(), is(new Record<>(key, value, timestamp)));
}
Also used: Long(org.apache.kafka.common.serialization.Serdes.Long) Record(org.apache.kafka.streams.processor.api.Record) String(org.apache.kafka.common.serialization.Serdes.String) CoreMatchers.containsString(org.hamcrest.CoreMatchers.containsString) Change(org.apache.kafka.streams.kstream.internals.Change) MockProcessorContext(org.apache.kafka.streams.processor.api.MockProcessorContext) Test(org.junit.Test)
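
Because maxRecords(0) gives the buffer zero capacity, every update overflows it immediately and is forwarded at once; the point of the test is that this eager path must forward tombstones rather than drop them. Through the public API, the equivalent configuration would be the line below (again using the illustrative table from the sketch under Example 6):

    // Zero capacity: every update, including tombstones, overflows and is emitted immediately.
    table.suppress(Suppressed.untilTimeLimit(Duration.ZERO, BufferConfig.maxRecords(0).emitEarlyWhenFull()));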

Example 10 with Change

Use of org.apache.kafka.streams.kstream.internals.Change in project kafka by apache.

From the class TimeOrderedKeyValueBufferTest, the method shouldRestoreV3Format.

@Test
public void shouldRestoreV3Format() {
    final TimeOrderedKeyValueBuffer<String, String> buffer = bufferSupplier.apply(testName);
    final MockInternalProcessorContext context = makeContext();
    buffer.init((StateStoreContext) context, buffer);
    final RecordBatchingStateRestoreCallback stateRestoreCallback = (RecordBatchingStateRestoreCallback) context.stateRestoreCallback(testName);
    context.setRecordContext(new ProcessorRecordContext(0, 0, 0, "", new RecordHeaders()));
    final RecordHeaders headers = new RecordHeaders(new Header[] { new RecordHeader("v", new byte[] { (byte) 3 }) });
    // These serialized formats were captured by running version 2.4 code.
    // They verify that an upgrade from 2.4 will work.
    // Do not change them.
    final String toDeleteBinary = "0000000000000000000000000000000000000005746F70696300000000FFFFFFFFFFFFFFFFFFFFFFFF00000006646F6F6D65640000000000000000";
    final String asdfBinary = "0000000000000001000000000000000000000005746F70696300000000FFFFFFFFFFFFFFFFFFFFFFFF00000004717765720000000000000002";
    final String zxcvBinary1 = "0000000000000002000000000000000000000005746F70696300000000FFFFFFFF0000000870726576696F75730000000749474E4F52454400000005336F34696D0000000000000001";
    final String zxcvBinary2 = "0000000000000003000000000000000000000005746F70696300000000FFFFFFFF0000000870726576696F757300000005336F34696D000000046E6578740000000000000001";
    stateRestoreCallback.restoreBatch(asList(
        new ConsumerRecord<>("changelog-topic", 0, 0, 999, TimestampType.CREATE_TIME, -1, -1,
            "todelete".getBytes(UTF_8), hexStringToByteArray(toDeleteBinary), headers, Optional.empty()),
        new ConsumerRecord<>("changelog-topic", 0, 1, 9999, TimestampType.CREATE_TIME, -1, -1,
            "asdf".getBytes(UTF_8), hexStringToByteArray(asdfBinary), headers, Optional.empty()),
        new ConsumerRecord<>("changelog-topic", 0, 2, 99, TimestampType.CREATE_TIME, -1, -1,
            "zxcv".getBytes(UTF_8), hexStringToByteArray(zxcvBinary1), headers, Optional.empty()),
        new ConsumerRecord<>("changelog-topic", 0, 2, 100, TimestampType.CREATE_TIME, -1, -1,
            "zxcv".getBytes(UTF_8), hexStringToByteArray(zxcvBinary2), headers, Optional.empty())));
    assertThat(buffer.numRecords(), is(3));
    assertThat(buffer.minTimestamp(), is(0L));
    assertThat(buffer.bufferSize(), is(142L));
    stateRestoreCallback.restoreBatch(singletonList(
        new ConsumerRecord<>("changelog-topic", 0, 3, 3, TimestampType.CREATE_TIME, -1, -1,
            "todelete".getBytes(UTF_8), null, new RecordHeaders(), Optional.empty())));
    assertThat(buffer.numRecords(), is(2));
    assertThat(buffer.minTimestamp(), is(1L));
    assertThat(buffer.bufferSize(), is(95L));
    assertThat(buffer.priorValueForBuffered("todelete"), is(Maybe.undefined()));
    assertThat(buffer.priorValueForBuffered("asdf"), is(Maybe.defined(null)));
    assertThat(buffer.priorValueForBuffered("zxcv"), is(Maybe.defined(ValueAndTimestamp.make("previous", -1))));
    // flush the buffer into a list in buffer order so we can make assertions about the contents.
    final List<Eviction<String, String>> evicted = new LinkedList<>();
    buffer.evictWhile(() -> true, evicted::add);
    // Several things to note:
    // * The buffered records are ordered according to their buffer time (serialized in the value of the changelog)
    // * The record timestamps are properly restored, and not conflated with the record's buffer time.
    // * The keys and values are properly restored
    // * The record topic is set to the original input topic, *not* the changelog topic
    // * The record offset preserves the original input record's offset, *not* the offset of the changelog record
    assertThat(evicted, is(asList(
        new Eviction<>("zxcv", new Change<>("next", "3o4im"), getContext(3L)),
        new Eviction<>("asdf", new Change<>("qwer", null), getContext(1L)))));
    cleanup(context, buffer);
}
Also used: Change(org.apache.kafka.streams.kstream.internals.Change) ConsumerRecord(org.apache.kafka.clients.consumer.ConsumerRecord) LinkedList(java.util.LinkedList) RecordBatchingStateRestoreCallback(org.apache.kafka.streams.processor.internals.RecordBatchingStateRestoreCallback) RecordHeaders(org.apache.kafka.common.header.internals.RecordHeaders) ProcessorRecordContext(org.apache.kafka.streams.processor.internals.ProcessorRecordContext) Eviction(org.apache.kafka.streams.state.internals.TimeOrderedKeyValueBuffer.Eviction) MockInternalProcessorContext(org.apache.kafka.test.MockInternalProcessorContext) RecordHeader(org.apache.kafka.common.header.internals.RecordHeader) Test(org.junit.Test)
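
The restore test above relies on a hexStringToByteArray helper whose definition falls outside this excerpt. A minimal reconstruction of what such a helper typically looks like (an assumption, not the project's actual code):

// Hypothetical reconstruction: decodes a hex string such as "0A1B" into the corresponding bytes.
private static byte[] hexStringToByteArray(final String hex) {
    final byte[] bytes = new byte[hex.length() / 2];
    for (int i = 0; i < bytes.length; i++) {
        bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    }
    return bytes;
}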

Aggregations

Change (org.apache.kafka.streams.kstream.internals.Change): 28 uses
Test (org.junit.Test): 23 uses
Long (org.apache.kafka.common.serialization.Serdes.Long): 15 uses
String (org.apache.kafka.common.serialization.Serdes.String): 15 uses
CoreMatchers.containsString (org.hamcrest.CoreMatchers.containsString): 15 uses
Record (org.apache.kafka.streams.processor.api.Record): 12 uses
MockProcessorContext (org.apache.kafka.streams.processor.api.MockProcessorContext): 11 uses
ProcessorRecordContext (org.apache.kafka.streams.processor.internals.ProcessorRecordContext): 10 uses
Windowed (org.apache.kafka.streams.kstream.Windowed): 8 uses
MockInternalProcessorContext (org.apache.kafka.test.MockInternalProcessorContext): 7 uses
LinkedList (java.util.LinkedList): 6 uses
RecordHeaders (org.apache.kafka.common.header.internals.RecordHeaders): 6 uses
TimeWindow (org.apache.kafka.streams.kstream.internals.TimeWindow): 6 uses
Eviction (org.apache.kafka.streams.state.internals.TimeOrderedKeyValueBuffer.Eviction): 6 uses
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord): 5 uses
RecordBatchingStateRestoreCallback (org.apache.kafka.streams.processor.internals.RecordBatchingStateRestoreCallback): 5 uses
RecordHeader (org.apache.kafka.common.header.internals.RecordHeader): 4 uses
Bytes (org.apache.kafka.common.utils.Bytes): 3 uses
ProcessorNode (org.apache.kafka.streams.processor.internals.ProcessorNode): 3 uses
StreamsException (org.apache.kafka.streams.errors.StreamsException): 2 uses