
Example 1 with DefaultProductionExceptionHandler

Use of org.apache.kafka.streams.errors.DefaultProductionExceptionHandler in project apache-kafka-on-k8s by banzaicloud.

From class ProcessorNodeTest, method testMetrics.

@Test
public void testMetrics() {
    final StateSerdes anyStateSerde = StateSerdes.withBuiltinTypes("anyName", Bytes.class, Bytes.class);
    final Metrics metrics = new Metrics();
    final InternalMockProcessorContext context = new InternalMockProcessorContext(anyStateSerde, new RecordCollectorImpl(null, null, new LogContext("processnode-test "), new DefaultProductionExceptionHandler()), metrics);
    final ProcessorNode node = new ProcessorNode("name", new NoOpProcessor(), Collections.emptySet());
    node.init(context);
    String[] latencyOperations = { "process", "punctuate", "create", "destroy" };
    String throughputOperation = "forward";
    String groupName = "stream-processor-node-metrics";
    final Map<String, String> metricTags = new LinkedHashMap<>();
    metricTags.put("processor-node-id", node.name());
    metricTags.put("task-id", context.taskId().toString());
    for (String operation : latencyOperations) {
        assertNotNull(metrics.getSensor(operation));
    }
    assertNotNull(metrics.getSensor(throughputOperation));
    for (String opName : latencyOperations) {
        testSpecificMetrics(metrics, groupName, opName, metricTags);
    }
    assertNotNull(metrics.metrics().get(metrics.metricName(throughputOperation + "-rate", groupName, "The average number of occurrence of " + throughputOperation + " operation per second.", metricTags)));
    // test "all"
    metricTags.put("processor-node-id", "all");
    for (String opName : latencyOperations) {
        testSpecificMetrics(metrics, groupName, opName, metricTags);
    }
    assertNotNull(metrics.metrics().get(metrics.metricName(throughputOperation + "-rate", groupName, "The average number of occurrence of " + throughputOperation + " operation per second.", metricTags)));
    context.close();
}
Also used : Metrics(org.apache.kafka.common.metrics.Metrics) DefaultProductionExceptionHandler(org.apache.kafka.streams.errors.DefaultProductionExceptionHandler) LogContext(org.apache.kafka.common.utils.LogContext) StateSerdes(org.apache.kafka.streams.state.StateSerdes) InternalMockProcessorContext(org.apache.kafka.test.InternalMockProcessorContext) LinkedHashMap(java.util.LinkedHashMap) Test(org.junit.Test)
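
In application code the handler is usually not constructed and handed to RecordCollectorImpl directly as in this test; it is supplied through the Streams configuration instead. A minimal sketch, with placeholder application id and bootstrap servers (classes used: java.util.Properties, org.apache.kafka.streams.StreamsConfig, org.apache.kafka.streams.errors.DefaultProductionExceptionHandler):

// Sketch: register the handler via StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG
// ("default.production.exception.handler"). Application id and bootstrap servers are placeholders.
final Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.DEFAULT_PRODUCTION_EXCEPTION_HANDLER_CLASS_CONFIG, DefaultProductionExceptionHandler.class);

The same property also accepts a custom ProductionExceptionHandler implementation, as sketched after Example 2.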

Example 2 with DefaultProductionExceptionHandler

Use of org.apache.kafka.streams.errors.DefaultProductionExceptionHandler in project apache-kafka-on-k8s by banzaicloud.

From class RecordCollectorTest, method shouldThrowStreamsExceptionOnSubsequentCallIfASendFailsWithDefaultExceptionHandler.

@SuppressWarnings("unchecked")
@Test
public void shouldThrowStreamsExceptionOnSubsequentCallIfASendFailsWithDefaultExceptionHandler() {
    final RecordCollector collector = new RecordCollectorImpl(new MockProducer(cluster, true, new DefaultPartitioner(), byteArraySerializer, byteArraySerializer) {

        @Override
        public synchronized Future<RecordMetadata> send(final ProducerRecord record, final Callback callback) {
            callback.onCompletion(null, new Exception());
            return null;
        }
    }, "test", logContext, new DefaultProductionExceptionHandler());
    collector.send("topic1", "3", "0", null, stringSerializer, stringSerializer, streamPartitioner);
    try {
        collector.send("topic1", "3", "0", null, stringSerializer, stringSerializer, streamPartitioner);
        fail("Should have thrown StreamsException");
    } catch (final StreamsException expected) {
    /* ok */
    }
}
Also used : MockProducer(org.apache.kafka.clients.producer.MockProducer) Callback(org.apache.kafka.clients.producer.Callback) DefaultPartitioner(org.apache.kafka.clients.producer.internals.DefaultPartitioner) DefaultProductionExceptionHandler(org.apache.kafka.streams.errors.DefaultProductionExceptionHandler) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) StreamsException(org.apache.kafka.streams.errors.StreamsException) Future(java.util.concurrent.Future) KafkaException(org.apache.kafka.common.KafkaException) Test(org.junit.Test)
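
The tests in this example and the two that follow rely on the fact that DefaultProductionExceptionHandler always returns FAIL, which is what turns the failed send into a StreamsException on the next interaction with the collector. For contrast, a minimal sketch of a custom handler that returns CONTINUE so a production error is skipped instead; the class name LogAndContinueProductionExceptionHandler is hypothetical, not a class shipped with Kafka:

import java.util.Map;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

// Hypothetical handler: swallow production errors instead of failing the task.
public class LogAndContinueProductionExceptionHandler implements ProductionExceptionHandler {

    @Override
    public ProductionExceptionHandlerResponse handle(final ProducerRecord<byte[], byte[]> record,
                                                     final Exception exception) {
        // A real implementation would log record.topic(), record.partition() and the exception here.
        return ProductionExceptionHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(final Map<String, ?> configs) {
        // no configuration needed for this sketch
    }
}

Registered through the same default.production.exception.handler property, such a handler lets the task keep processing after a send failure rather than throw.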

Example 3 with DefaultProductionExceptionHandler

Use of org.apache.kafka.streams.errors.DefaultProductionExceptionHandler in project apache-kafka-on-k8s by banzaicloud.

From class RecordCollectorTest, method shouldThrowStreamsExceptionOnFlushIfASendFailedWithDefaultExceptionHandler.

@SuppressWarnings("unchecked")
@Test
public void shouldThrowStreamsExceptionOnFlushIfASendFailedWithDefaultExceptionHandler() {
    final RecordCollector collector = new RecordCollectorImpl(new MockProducer(cluster, true, new DefaultPartitioner(), byteArraySerializer, byteArraySerializer) {

        @Override
        public synchronized Future<RecordMetadata> send(final ProducerRecord record, final Callback callback) {
            callback.onCompletion(null, new Exception());
            return null;
        }
    }, "test", logContext, new DefaultProductionExceptionHandler());
    collector.send("topic1", "3", "0", null, stringSerializer, stringSerializer, streamPartitioner);
    try {
        collector.flush();
        fail("Should have thrown StreamsException");
    } catch (final StreamsException expected) {
    /* ok */
    }
}
Also used : MockProducer(org.apache.kafka.clients.producer.MockProducer) Callback(org.apache.kafka.clients.producer.Callback) DefaultPartitioner(org.apache.kafka.clients.producer.internals.DefaultPartitioner) DefaultProductionExceptionHandler(org.apache.kafka.streams.errors.DefaultProductionExceptionHandler) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) StreamsException(org.apache.kafka.streams.errors.StreamsException) Future(java.util.concurrent.Future) KafkaException(org.apache.kafka.common.KafkaException) Test(org.junit.Test)
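
For comparison with the flush test above, a sketch of the same scenario using the hypothetical CONTINUE handler from Example 2 instead of the default one; because the handler continues, the recorded send error is dropped and flush() is expected not to throw. This mirrors the structure of the real tests but is illustrative only:

@SuppressWarnings("unchecked")
@Test
public void sketchShouldNotThrowOnFlushWithContinueExceptionHandler() {
    // Same setup as above, but the handler returns CONTINUE instead of FAIL.
    final RecordCollector collector = new RecordCollectorImpl(new MockProducer(cluster, true, new DefaultPartitioner(), byteArraySerializer, byteArraySerializer) {

        @Override
        public synchronized Future<RecordMetadata> send(final ProducerRecord record, final Callback callback) {
            callback.onCompletion(null, new Exception());
            return null;
        }
    }, "test", logContext, new LogAndContinueProductionExceptionHandler());
    collector.send("topic1", "3", "0", null, stringSerializer, stringSerializer, streamPartitioner);
    // Expected: no StreamsException, because the handler chose to continue.
    collector.flush();
}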

Example 4 with DefaultProductionExceptionHandler

Use of org.apache.kafka.streams.errors.DefaultProductionExceptionHandler in project apache-kafka-on-k8s by banzaicloud.

From class RecordCollectorTest, method shouldThrowStreamsExceptionOnCloseIfASendFailedWithDefaultExceptionHandler.

@SuppressWarnings("unchecked")
@Test
public void shouldThrowStreamsExceptionOnCloseIfASendFailedWithDefaultExceptionHandler() {
    final RecordCollector collector = new RecordCollectorImpl(new MockProducer(cluster, true, new DefaultPartitioner(), byteArraySerializer, byteArraySerializer) {

        @Override
        public synchronized Future<RecordMetadata> send(final ProducerRecord record, final Callback callback) {
            callback.onCompletion(null, new Exception());
            return null;
        }
    }, "test", logContext, new DefaultProductionExceptionHandler());
    collector.send("topic1", "3", "0", null, stringSerializer, stringSerializer, streamPartitioner);
    try {
        collector.close();
        fail("Should have thrown StreamsException");
    } catch (final StreamsException expected) {
    /* ok */
    }
}
Also used : MockProducer(org.apache.kafka.clients.producer.MockProducer) Callback(org.apache.kafka.clients.producer.Callback) DefaultPartitioner(org.apache.kafka.clients.producer.internals.DefaultPartitioner) DefaultProductionExceptionHandler(org.apache.kafka.streams.errors.DefaultProductionExceptionHandler) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) StreamsException(org.apache.kafka.streams.errors.StreamsException) Future(java.util.concurrent.Future) KafkaException(org.apache.kafka.common.KafkaException) Test(org.junit.Test)

Example 5 with DefaultProductionExceptionHandler

Use of org.apache.kafka.streams.errors.DefaultProductionExceptionHandler in project apache-kafka-on-k8s by banzaicloud.

From class RecordCollectorTest, method shouldThrowStreamsExceptionOnAnyExceptionButProducerFencedException.

@SuppressWarnings("unchecked")
@Test(expected = StreamsException.class)
public void shouldThrowStreamsExceptionOnAnyExceptionButProducerFencedException() {
    final RecordCollector collector = new RecordCollectorImpl(new MockProducer(cluster, true, new DefaultPartitioner(), byteArraySerializer, byteArraySerializer) {

        @Override
        public synchronized Future<RecordMetadata> send(final ProducerRecord record, final Callback callback) {
            throw new KafkaException();
        }
    }, "test", logContext, new DefaultProductionExceptionHandler());
    collector.send("topic1", "3", "0", null, stringSerializer, stringSerializer, streamPartitioner);
}
Also used : MockProducer(org.apache.kafka.clients.producer.MockProducer) Callback(org.apache.kafka.clients.producer.Callback) DefaultPartitioner(org.apache.kafka.clients.producer.internals.DefaultPartitioner) DefaultProductionExceptionHandler(org.apache.kafka.streams.errors.DefaultProductionExceptionHandler) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) Future(java.util.concurrent.Future) KafkaException(org.apache.kafka.common.KafkaException) Test(org.junit.Test)
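
Unlike Examples 2 to 4, the failure here is thrown synchronously from Producer#send rather than reported through the callback. As the test name states, any such exception is wrapped in a StreamsException, with ProducerFencedException as the one exclusion (fencing means another instance has taken over the task). Below is a simplified sketch of the behaviour the test asserts, not the actual RecordCollectorImpl source; the method name and error message are illustrative:

// Illustrative sketch only: what Example 5 asserts about synchronous send failures.
private void trySend(final Producer<byte[], byte[]> producer,
                     final ProducerRecord<byte[], byte[]> record,
                     final Callback callback) {
    try {
        producer.send(record, callback);
    } catch (final ProducerFencedException fenced) {
        // Fencing is surfaced to the task separately instead of being wrapped here.
        throw fenced;
    } catch (final KafkaException e) {
        throw new StreamsException("Error sending record to topic " + record.topic(), e);
    }
}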

Aggregations

DefaultProductionExceptionHandler (org.apache.kafka.streams.errors.DefaultProductionExceptionHandler): 8
Test (org.junit.Test): 8
DefaultPartitioner (org.apache.kafka.clients.producer.internals.DefaultPartitioner): 7
MockProducer (org.apache.kafka.clients.producer.MockProducer): 5
Future (java.util.concurrent.Future): 4
Callback (org.apache.kafka.clients.producer.Callback): 4
ProducerRecord (org.apache.kafka.clients.producer.ProducerRecord): 4
KafkaException (org.apache.kafka.common.KafkaException): 4
LogContext (org.apache.kafka.common.utils.LogContext): 3
StreamsException (org.apache.kafka.streams.errors.StreamsException): 3
TopicPartition (org.apache.kafka.common.TopicPartition): 2
LinkedHashMap (java.util.LinkedHashMap): 1
List (java.util.List): 1
Metrics (org.apache.kafka.common.metrics.Metrics): 1
StateSerdes (org.apache.kafka.streams.state.StateSerdes): 1
InternalMockProcessorContext (org.apache.kafka.test.InternalMockProcessorContext): 1