
Example 6 with DefaultPartitioner

Use of org.apache.kafka.clients.producer.internals.DefaultPartitioner in the Apache Kafka project (apache/kafka).

From the class RecordCollectorTest, the method shouldThrowStreamsExceptionOnFlushIfASendFailed:

@SuppressWarnings("unchecked")
@Test(expected = StreamsException.class)
public void shouldThrowStreamsExceptionOnFlushIfASendFailed() throws Exception {
    final RecordCollector collector = new RecordCollectorImpl(new MockProducer(cluster, true, new DefaultPartitioner(), byteArraySerializer, byteArraySerializer) {

        @Override
        public synchronized Future<RecordMetadata> send(final ProducerRecord record, final Callback callback) {
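            // Simulate a send failure: complete the callback with an error and skip the real send.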
            callback.onCompletion(null, new Exception());
            return null;
        }
    }, "test");
    collector.send("topic1", "3", "0", null, null, stringSerializer, stringSerializer, streamPartitioner);
    collector.flush();
}
Also used: MockProducer (org.apache.kafka.clients.producer.MockProducer), Callback (org.apache.kafka.clients.producer.Callback), DefaultPartitioner (org.apache.kafka.clients.producer.internals.DefaultPartitioner), ProducerRecord (org.apache.kafka.clients.producer.ProducerRecord), Future (java.util.concurrent.Future), TimeoutException (org.apache.kafka.common.errors.TimeoutException), StreamsException (org.apache.kafka.streams.errors.StreamsException), Test (org.junit.Test)
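
For context, here is a minimal standalone sketch of what DefaultPartitioner does with a keyed record such as the "3" key above: it takes the positive murmur2 hash of the serialized key and maps it onto the topic's partitions. The class name, the partition count of 4, and the UTF-8 key serialization below are illustrative assumptions, not taken from the test.

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

public class KeyedPartitionSketch {
    public static void main(final String[] args) {
        // Assumed inputs: the record key used by the test ("3") serialized as UTF-8,
        // and a made-up partition count of 4.
        final byte[] keyBytes = "3".getBytes(StandardCharsets.UTF_8);
        final int numPartitions = 4;

        // Mirrors the keyed-record branch of DefaultPartitioner: positive murmur2
        // hash of the key bytes, modulo the number of partitions.
        final int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        System.out.println("key \"3\" -> partition " + partition);
    }
}

In producer clients of this vintage, a record with a null key instead falls back to a round-robin choice over the topic's available partitions.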

Example 7 with DefaultPartitioner

Use of org.apache.kafka.clients.producer.internals.DefaultPartitioner in the Apache Kafka project (apache/kafka).

From the class RecordCollectorTest, the method shouldRetryWhenTimeoutExceptionOccursOnSend:

@SuppressWarnings("unchecked")
@Test
public void shouldRetryWhenTimeoutExceptionOccursOnSend() throws Exception {
    final AtomicInteger attempt = new AtomicInteger(0);
    RecordCollectorImpl collector = new RecordCollectorImpl(new MockProducer(cluster, true, new DefaultPartitioner(), byteArraySerializer, byteArraySerializer) {

        @Override
        public synchronized Future<RecordMetadata> send(final ProducerRecord record, final Callback callback) {
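            // Throw a TimeoutException on the first attempt only; later attempts delegate to the real MockProducer send.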
            if (attempt.getAndIncrement() == 0) {
                throw new TimeoutException();
            }
            return super.send(record, callback);
        }
    }, "test");
    collector.send("topic1", "3", "0", null, null, stringSerializer, stringSerializer, streamPartitioner);
    final Long offset = collector.offsets().get(new TopicPartition("topic1", 0));
    assertEquals(Long.valueOf(0L), offset);
}
Also used: MockProducer (org.apache.kafka.clients.producer.MockProducer), Callback (org.apache.kafka.clients.producer.Callback), DefaultPartitioner (org.apache.kafka.clients.producer.internals.DefaultPartitioner), AtomicInteger (java.util.concurrent.atomic.AtomicInteger), TopicPartition (org.apache.kafka.common.TopicPartition), ProducerRecord (org.apache.kafka.clients.producer.ProducerRecord), Future (java.util.concurrent.Future), TimeoutException (org.apache.kafka.common.errors.TimeoutException), Test (org.junit.Test)
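
The test above relies on RecordCollectorImpl retrying internally after the first synchronous TimeoutException. Below is a minimal sketch of that retry idea written against the public producer API rather than the Streams internals; the class name, helper name, attempt limit, and use of MockProducer are assumptions for illustration, not the RecordCollectorImpl implementation.

import java.util.concurrent.Future;
import org.apache.kafka.clients.producer.MockProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.errors.TimeoutException;
import org.apache.kafka.common.serialization.StringSerializer;

public class RetryOnTimeoutSketch {

    // Hypothetical helper: retry a synchronous TimeoutException from send() a bounded
    // number of times, then rethrow it.
    static Future<RecordMetadata> sendWithRetries(final Producer<String, String> producer,
                                                  final ProducerRecord<String, String> record,
                                                  final int maxAttempts) {
        TimeoutException lastError = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return producer.send(record);
            } catch (final TimeoutException e) {
                lastError = e; // transient; try again
            }
        }
        throw lastError;
    }

    public static void main(final String[] args) {
        // MockProducer with autoComplete = true acknowledges each send immediately.
        final MockProducer<String, String> producer =
                new MockProducer<>(true, new StringSerializer(), new StringSerializer());
        sendWithRetries(producer, new ProducerRecord<>("topic1", "3", "0"), 3);
        System.out.println("records sent: " + producer.history().size());
    }
}

Because the test's MockProducer is also constructed with autoComplete = true, the retried send completes immediately, which is why the test can assert on collector.offsets() right after sending.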

Aggregations

DefaultPartitioner (org.apache.kafka.clients.producer.internals.DefaultPartitioner): 7 uses
Test (org.junit.Test): 7 uses
Future (java.util.concurrent.Future): 5 uses
Callback (org.apache.kafka.clients.producer.Callback): 5 uses
MockProducer (org.apache.kafka.clients.producer.MockProducer): 5 uses
ProducerRecord (org.apache.kafka.clients.producer.ProducerRecord): 5 uses
TimeoutException (org.apache.kafka.common.errors.TimeoutException): 5 uses
StreamsException (org.apache.kafka.streams.errors.StreamsException): 3 uses
Random (java.util.Random): 1 use
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 1 use
Cluster (org.apache.kafka.common.Cluster): 1 use
Node (org.apache.kafka.common.Node): 1 use
PartitionInfo (org.apache.kafka.common.PartitionInfo): 1 use
TopicPartition (org.apache.kafka.common.TopicPartition): 1 use
StringSerializer (org.apache.kafka.common.serialization.StringSerializer): 1 use
Windowed (org.apache.kafka.streams.kstream.Windowed): 1 use