Use of org.apache.kafka.streams.errors.StreamsException in project kafka by apache.
The class RecordCollectorTest, method shouldThrowInformativeStreamsExceptionOnValueClassCastException.
@SuppressWarnings({ "unchecked", "rawtypes" })
@Test
public void shouldThrowInformativeStreamsExceptionOnValueClassCastException() {
    final StreamsException expected = assertThrows(
        StreamsException.class,
        () -> this.collector.send(
            "topic",
            "key",
            "value",
            new RecordHeaders(),
            0,
            0L,
            new StringSerializer(),
            // need to add cast to trigger `ClassCastException`
            (Serializer) new LongSerializer()));
    assertThat(expected.getCause(), instanceOf(ClassCastException.class));
    assertThat(expected.getMessage(), equalTo(
        "ClassCastException while producing data to topic topic. " +
        "A serializer (key: org.apache.kafka.common.serialization.StringSerializer / value: org.apache.kafka.common.serialization.LongSerializer) " +
        "is not compatible to the actual key or value type (key type: java.lang.String / value type: java.lang.String). " +
        "Change the default Serdes in StreamConfig or provide correct Serdes via method parameters " +
        "(for example if using the DSL, `#to(String topic, Produced<K, V> produced)` with `Produced.keySerde(WindowedSerdes.timeWindowedSerdeFrom(String.class))`)."));
}
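The assertion message above also points at the standard fix: supply explicit Serdes at the sink instead of relying on mismatched defaults. A minimal usage sketch, assuming a KStream<String, String> named `stream` built elsewhere:

// Hypothetical sketch: Produced.with supplies Serdes matching the actual
// key and value types, which avoids the ClassCastException asserted above.
stream.to("topic", Produced.with(Serdes.String(), Serdes.String()));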
Use of org.apache.kafka.streams.errors.StreamsException in project kafka by apache.
The class RecordCollectorTest, method shouldThrowIfTopicIsUnknownOnSendWithPartitioner.
@Test
public void shouldThrowIfTopicIsUnknownOnSendWithPartitioner() {
    final RecordCollector collector = new RecordCollectorImpl(
        logContext,
        taskId,
        new StreamsProducer(
            config,
            processId + "-StreamThread-1",
            new MockClientSupplier() {
                @Override
                public Producer<byte[], byte[]> getProducer(final Map<String, Object> config) {
                    return new MockProducer<byte[], byte[]>(cluster, true, new DefaultPartitioner(), byteArraySerializer, byteArraySerializer) {
                        @Override
                        public List<PartitionInfo> partitionsFor(final String topic) {
                            // an empty partition list makes the topic look unknown to the collector
                            return Collections.emptyList();
                        }
                    };
                }
            },
            null,
            null,
            logContext,
            Time.SYSTEM),
        productionExceptionHandler,
        streamsMetrics);
    collector.initialize();
    final StreamsException thrown = assertThrows(
        StreamsException.class,
        () -> collector.send(topic, "3", "0", null, null, stringSerializer, stringSerializer, streamPartitioner));
    assertThat(thrown.getMessage(), equalTo(
        "Could not get partition information for topic topic for task 0_0." +
        " This can happen if the topic does not exist."));
}
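As the asserted message notes, this failure typically means the topic does not exist. A minimal sketch of creating it up front with the Admin client, assuming a broker at localhost:9092 and omitting exception handling:

// Hypothetical sketch (broker address is an assumption): pre-creating the
// topic so partitionsFor() returns real partition info, not an empty list.
try (Admin admin = Admin.create(
        Map.of(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"))) {
    admin.createTopics(Collections.singletonList(new NewTopic("topic", 1, (short) 1)))
         .all()
         .get(); // blocks until the broker confirms creation
}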
Use of org.apache.kafka.streams.errors.StreamsException in project kafka by apache.
The class RecordCollectorTest, method shouldThrowStreamsExceptionOnSubsequentCloseIfFatalEvenWithContinueExceptionHandler.
@Test
public void shouldThrowStreamsExceptionOnSubsequentCloseIfFatalEvenWithContinueExceptionHandler() {
    final KafkaException exception = new AuthenticationException("KABOOM!");
    final RecordCollector collector = new RecordCollectorImpl(
        logContext,
        taskId,
        getExceptionalStreamsProducerOnSend(exception),
        new AlwaysContinueProductionExceptionHandler(),
        streamsMetrics);
    collector.send(topic, "3", "0", null, null, stringSerializer, stringSerializer, streamPartitioner);
    final StreamsException thrown = assertThrows(StreamsException.class, collector::closeClean);
    assertEquals(exception, thrown.getCause());
    assertThat(thrown.getMessage(), equalTo(
        "Error encountered sending record to topic topic for task 0_0 due to:" +
        "\norg.apache.kafka.common.errors.AuthenticationException: KABOOM!" +
        "\nWritten offsets would not be recorded and no more records would be sent since this is a fatal error."));
}
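The point of the test is that fatal errors such as AuthenticationException bypass the production exception handler entirely. A minimal sketch of a continue-style handler analogous to the AlwaysContinueProductionExceptionHandler used above; the interface shape is assumed from the Kafka version under test:

// Handler sketch: returning CONTINUE skips non-fatal send errors, but fatal
// exceptions still fail the task, exactly as the test above asserts.
public class ContinueOnSendErrorHandler implements ProductionExceptionHandler {
    @Override
    public ProductionExceptionHandlerResponse handle(final ProducerRecord<byte[], byte[]> record,
                                                     final Exception exception) {
        return ProductionExceptionHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(final Map<String, ?> configs) { }
}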
Use of org.apache.kafka.streams.errors.StreamsException in project kafka by apache.
The class RecordCollectorTest, method shouldThrowStreamsExceptionOnSubsequentSendIfFatalEvenWithContinueExceptionHandler.
@Test
public void shouldThrowStreamsExceptionOnSubsequentSendIfFatalEvenWithContinueExceptionHandler() {
    final KafkaException exception = new AuthenticationException("KABOOM!");
    final RecordCollector collector = new RecordCollectorImpl(
        logContext,
        taskId,
        getExceptionalStreamsProducerOnSend(exception),
        new AlwaysContinueProductionExceptionHandler(),
        streamsMetrics);
    collector.send(topic, "3", "0", null, null, stringSerializer, stringSerializer, streamPartitioner);
    final StreamsException thrown = assertThrows(
        StreamsException.class,
        () -> collector.send(topic, "3", "0", null, null, stringSerializer, stringSerializer, streamPartitioner));
    assertEquals(exception, thrown.getCause());
    assertThat(thrown.getMessage(), equalTo(
        "Error encountered sending record to topic topic for task 0_0 due to:" +
        "\norg.apache.kafka.common.errors.AuthenticationException: KABOOM!" +
        "\nWritten offsets would not be recorded and no more records would be sent since this is a fatal error."));
}
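At the application level, a fatal StreamsException like the one asserted here surfaces through the stream thread's uncaught exception handler. A minimal sketch, assuming a running KafkaStreams instance named `streams` and the handler API introduced in Kafka 2.8:

// Hypothetical sketch: `streams` is an assumed KafkaStreams instance.
// SHUTDOWN_CLIENT stops all stream threads of this instance on a fatal error.
streams.setUncaughtExceptionHandler(exception ->
    StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.SHUTDOWN_CLIENT);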
Use of org.apache.kafka.streams.errors.StreamsException in project kafka by apache.
The class RecordQueueTest, method shouldThrowStreamsExceptionWhenValueDeserializationFails.
@Test
public void shouldThrowStreamsExceptionWhenValueDeserializationFails() {
    // serialize the value as a long so the queue's value deserializer cannot handle it
    final byte[] value = Serdes.Long().serializer().serialize("foo", 1L);
    final List<ConsumerRecord<byte[], byte[]>> records = Collections.singletonList(
        new ConsumerRecord<>("topic", 1, 1, 0L, TimestampType.CREATE_TIME, 0, 0,
            recordKey, value, new RecordHeaders(), Optional.empty()));
    final StreamsException exception = assertThrows(StreamsException.class, () -> queue.addRawRecords(records));
    assertThat(exception.getCause(), instanceOf(SerializationException.class));
}
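In a running topology the equivalent failure is usually routed to the configured deserialization exception handler rather than left to abort processing. A minimal configuration sketch, assuming a Properties object named `props` for the application:

// Hypothetical config sketch: LogAndContinueExceptionHandler logs and skips
// records that fail deserialization instead of raising a StreamsException.
props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG,
          LogAndContinueExceptionHandler.class);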