Search in sources:

Example 1 with Producer

Use of org.apache.kafka.clients.producer.Producer in the project apache-kafka-on-k8s by banzaicloud.

From the class StreamThreadTest, method shouldInjectSharedProducerForAllTasksUsingClientSupplierOnCreateIfEosDisabled:

@Test
public void shouldInjectSharedProducerForAllTasksUsingClientSupplierOnCreateIfEosDisabled() {
    internalTopologyBuilder.addSource(null, "source1", null, null, null, topic1);
    final StreamThread thread = createStreamThread(clientId, config, false);
    thread.setState(StreamThread.State.RUNNING);
    thread.rebalanceListener.onPartitionsRevoked(Collections.<TopicPartition>emptyList());
    final Map<TaskId, Set<TopicPartition>> activeTasks = new HashMap<>();
    final List<TopicPartition> assignedPartitions = new ArrayList<>();
    // assign one partition to each of the two tasks
    assignedPartitions.add(t1p1);
    assignedPartitions.add(t1p2);
    activeTasks.put(task1, Collections.singleton(t1p1));
    activeTasks.put(task2, Collections.singleton(t1p2));
    thread.taskManager().setAssignmentMetadata(activeTasks, Collections.<TaskId, Set<TopicPartition>>emptyMap());
    final MockConsumer<byte[], byte[]> mockConsumer = (MockConsumer<byte[], byte[]>) thread.consumer;
    mockConsumer.assign(assignedPartitions);
    Map<TopicPartition, Long> beginOffsets = new HashMap<>();
    beginOffsets.put(t1p1, 0L);
    beginOffsets.put(t1p2, 0L);
    mockConsumer.updateBeginningOffsets(beginOffsets);
    thread.rebalanceListener.onPartitionsAssigned(new HashSet<>(assignedPartitions));
    assertEquals(1, clientSupplier.producers.size());
    final Producer globalProducer = clientSupplier.producers.get(0);
    for (final Task task : thread.tasks().values()) {
        assertSame(globalProducer, ((RecordCollectorImpl) ((StreamTask) task).recordCollector()).producer());
    }
    assertSame(clientSupplier.consumer, thread.consumer);
    assertSame(clientSupplier.restoreConsumer, thread.restoreConsumer);
}
Also used : TaskId(org.apache.kafka.streams.processor.TaskId) Set(java.util.Set) HashSet(java.util.HashSet) HashMap(java.util.HashMap) ArrayList(java.util.ArrayList) Producer(org.apache.kafka.clients.producer.Producer) MockProducer(org.apache.kafka.clients.producer.MockProducer) TopicPartition(org.apache.kafka.common.TopicPartition) MockConsumer(org.apache.kafka.clients.consumer.MockConsumer) InternalStreamsBuilderTest(org.apache.kafka.streams.kstream.internals.InternalStreamsBuilderTest) Test(org.junit.Test)
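
As an aside on why the assertion holds: when exactly-once processing is disabled, the KafkaClientSupplier hands the stream thread a single producer and every StreamTask's RecordCollectorImpl wraps that same instance, which is safe because KafkaProducer is thread-safe. A minimal standalone sketch of that sharing idea, outside the Streams test harness (the broker address is a placeholder, and the usual org.apache.kafka.clients.producer and org.apache.kafka.common.serialization imports are assumed):

private static Producer<byte[], byte[]> createSharedProducer() {
    final Properties props = new Properties();
    // placeholder bootstrap servers; point this at a real cluster
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
    // KafkaProducer is thread-safe, so this one instance can be handed to every task
    return new KafkaProducer<>(props);
}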

Example 2 with Producer

Use of org.apache.kafka.clients.producer.Producer in the project nakadi by zalando.

From the class KafkaTopicRepository, method publishItem:

private static CompletableFuture<Exception> publishItem(final Producer<String, String> producer, final String topicId, final BatchItem item, final HystrixKafkaCircuitBreaker circuitBreaker) throws EventPublishingException {
    try {
        final CompletableFuture<Exception> result = new CompletableFuture<>();
        final ProducerRecord<String, String> kafkaRecord = new ProducerRecord<>(topicId, KafkaCursor.toKafkaPartition(item.getPartition()), item.getPartition(), item.dumpEventToString());
        circuitBreaker.markStart();
        producer.send(kafkaRecord, ((metadata, exception) -> {
            if (null != exception) {
                LOG.warn("Failed to publish to kafka topic {}", topicId, exception);
                item.updateStatusAndDetail(EventPublishingStatus.FAILED, "internal error");
                if (hasKafkaConnectionException(exception)) {
                    circuitBreaker.markFailure();
                } else {
                    circuitBreaker.markSuccessfully();
                }
                result.complete(exception);
            } else {
                item.updateStatusAndDetail(EventPublishingStatus.SUBMITTED, "");
                circuitBreaker.markSuccessfully();
                result.complete(null);
            }
        }));
        return result;
    } catch (final InterruptException e) {
        Thread.currentThread().interrupt();
        circuitBreaker.markSuccessfully();
        item.updateStatusAndDetail(EventPublishingStatus.FAILED, "internal error");
        throw new EventPublishingException("Error publishing message to kafka", e);
    } catch (final RuntimeException e) {
        circuitBreaker.markSuccessfully();
        item.updateStatusAndDetail(EventPublishingStatus.FAILED, "internal error");
        throw new EventPublishingException("Error publishing message to kafka", e);
    }
}
Also used : EventPublishingException(org.zalando.nakadi.exceptions.EventPublishingException) NotLeaderForPartitionException(org.apache.kafka.common.errors.NotLeaderForPartitionException) Collections.unmodifiableList(java.util.Collections.unmodifiableList) LoggerFactory(org.slf4j.LoggerFactory) TimeoutException(java.util.concurrent.TimeoutException) TopicRepositoryException(org.zalando.nakadi.exceptions.runtime.TopicRepositoryException) PARTITION_NOT_FOUND(org.zalando.nakadi.domain.CursorError.PARTITION_NOT_FOUND) ServiceUnavailableException(org.zalando.nakadi.exceptions.ServiceUnavailableException) Map(java.util.Map) RetryForSpecifiedTimeStrategy(org.echocat.jomon.runtime.concurrent.RetryForSpecifiedTimeStrategy) Consumer(org.apache.kafka.clients.consumer.Consumer) ZooKeeperHolder(org.zalando.nakadi.repository.zookeeper.ZooKeeperHolder) TopicPartition(org.apache.kafka.common.TopicPartition) TopicRepository(org.zalando.nakadi.repository.TopicRepository) Retryer(org.echocat.jomon.runtime.concurrent.Retryer) Collection(java.util.Collection) PartitionStatistics(org.zalando.nakadi.domain.PartitionStatistics) ConcurrentHashMap(java.util.concurrent.ConcurrentHashMap) Set(java.util.Set) ConfigType(kafka.server.ConfigType) PartitionInfo(org.apache.kafka.common.PartitionInfo) InvalidCursorException(org.zalando.nakadi.exceptions.InvalidCursorException) Collectors(java.util.stream.Collectors) TopicDeletionException(org.zalando.nakadi.exceptions.TopicDeletionException) Objects(java.util.Objects) ZkUtils(kafka.utils.ZkUtils) TopicExistsException(org.apache.kafka.common.errors.TopicExistsException) List(java.util.List) Stream(java.util.stream.Stream) Lists.newArrayList(com.google.common.collect.Lists.newArrayList) Timeline(org.zalando.nakadi.domain.Timeline) ZookeeperSettings(org.zalando.nakadi.repository.zookeeper.ZookeeperSettings) NULL_OFFSET(org.zalando.nakadi.domain.CursorError.NULL_OFFSET) BatchItem(org.zalando.nakadi.domain.BatchItem) Optional(java.util.Optional) UnknownTopicOrPartitionException(org.apache.kafka.common.errors.UnknownTopicOrPartitionException) AdminUtils(kafka.admin.AdminUtils) IntStream(java.util.stream.IntStream) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) NetworkException(org.apache.kafka.common.errors.NetworkException) NakadiCursor(org.zalando.nakadi.domain.NakadiCursor) NakadiSettings(org.zalando.nakadi.config.NakadiSettings) TopicCreationException(org.zalando.nakadi.exceptions.TopicCreationException) HashMap(java.util.HashMap) CompletableFuture(java.util.concurrent.CompletableFuture) TopicConfigException(org.zalando.nakadi.exceptions.runtime.TopicConfigException) UnknownServerException(org.apache.kafka.common.errors.UnknownServerException) ArrayList(java.util.ArrayList) ConcurrentMap(java.util.concurrent.ConcurrentMap) UUIDGenerator(org.zalando.nakadi.util.UUIDGenerator) InterruptException(org.apache.kafka.common.errors.InterruptException) EventPublishingStep(org.zalando.nakadi.domain.EventPublishingStep) Nullable(javax.annotation.Nullable) UNAVAILABLE(org.zalando.nakadi.domain.CursorError.UNAVAILABLE) NULL_PARTITION(org.zalando.nakadi.domain.CursorError.NULL_PARTITION) Logger(org.slf4j.Logger) Properties(java.util.Properties) Producer(org.apache.kafka.clients.producer.Producer) PartitionEndStatistics(org.zalando.nakadi.domain.PartitionEndStatistics) ExecutionException(java.util.concurrent.ExecutionException) TimeUnit(java.util.concurrent.TimeUnit) EventConsumer(org.zalando.nakadi.repository.EventConsumer) 
Collectors.toList(java.util.stream.Collectors.toList) EventPublishingStatus(org.zalando.nakadi.domain.EventPublishingStatus) Preconditions(com.google.common.base.Preconditions) Collections(java.util.Collections) RackAwareMode(kafka.admin.RackAwareMode)
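
The detail worth keeping from publishItem is that Producer.send is asynchronous: the method ignores the returned Future and instead uses the send callback to complete a CompletableFuture and to report the outcome to the circuit breaker. A rough sketch of the same callback-to-future bridge with the Nakadi-specific types stripped out (topic and payload are placeholders; only kafka-clients and java.util.concurrent are assumed):

private static CompletableFuture<RecordMetadata> sendAsync(final Producer<String, String> producer,
                                                           final String topic,
                                                           final String payload) {
    final CompletableFuture<RecordMetadata> result = new CompletableFuture<>();
    producer.send(new ProducerRecord<>(topic, payload), (metadata, exception) -> {
        if (exception != null) {
            // surface the failure to whoever is waiting on the future
            result.completeExceptionally(exception);
        } else {
            result.complete(metadata);
        }
    });
    return result;
}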

Example 3 with Producer

Use of org.apache.kafka.clients.producer.Producer in the project atlas by apache.

From the class KafkaNotificationMockTest, method shouldThrowExceptionIfProducerFails:

@Test
@SuppressWarnings("unchecked")
public void shouldThrowExceptionIfProducerFails() throws NotificationException, ExecutionException, InterruptedException {
    Properties configProperties = mock(Properties.class);
    KafkaNotification kafkaNotification = new KafkaNotification(configProperties);
    Producer producer = mock(Producer.class);
    String topicName = kafkaNotification.getTopicName(NotificationInterface.NotificationType.HOOK);
    String message = "This is a test message";
    Future returnValue = mock(Future.class);
    when(returnValue.get()).thenThrow(new RuntimeException("Simulating exception"));
    ProducerRecord expectedRecord = new ProducerRecord(topicName, message);
    when(producer.send(expectedRecord)).thenReturn(returnValue);
    try {
        kafkaNotification.sendInternalToProducer(producer, NotificationInterface.NotificationType.HOOK, Arrays.asList(new String[] { message }));
        fail("Should have thrown NotificationException");
    } catch (NotificationException e) {
        assertEquals(e.getFailedMessages().size(), 1);
        assertEquals(e.getFailedMessages().get(0), "This is a test message");
    }
}
Also used : Producer(org.apache.kafka.clients.producer.Producer) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) NotificationException(org.apache.atlas.notification.NotificationException) Future(java.util.concurrent.Future) Properties(java.util.Properties) Test(org.testng.annotations.Test)
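
An alternative to stubbing Producer with Mockito is the MockProducer class that ships with kafka-clients: constructed with autoComplete disabled, it leaves each send pending until errorNext (or completeNext) is called, which exercises the same failure path without when/thenThrow. A rough sketch, with the topic name as a placeholder:

MockProducer<String, String> producer =
        new MockProducer<>(false, new StringSerializer(), new StringSerializer());
Future<RecordMetadata> future = producer.send(new ProducerRecord<>("test-topic", "This is a test message"));
// fail the oldest pending send, simulating a broker-side error;
// future.get() will now throw an ExecutionException wrapping it
producer.errorNext(new RuntimeException("Simulating exception"));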

Example 4 with Producer

Use of org.apache.kafka.clients.producer.Producer in the project atlas by apache.

From the class KafkaNotificationMockTest, method shouldSendMessagesSuccessfully:

@Test
@SuppressWarnings("unchecked")
public void shouldSendMessagesSuccessfully() throws NotificationException, ExecutionException, InterruptedException {
    Properties configProperties = mock(Properties.class);
    KafkaNotification kafkaNotification = new KafkaNotification(configProperties);
    Producer producer = mock(Producer.class);
    String topicName = kafkaNotification.getTopicName(NotificationInterface.NotificationType.HOOK);
    String message = "This is a test message";
    Future returnValue = mock(Future.class);
    TopicPartition topicPartition = new TopicPartition(topicName, 0);
    when(returnValue.get()).thenReturn(new RecordMetadata(topicPartition, 0, 0, 0, Long.valueOf(0), 0, 0));
    ProducerRecord expectedRecord = new ProducerRecord(topicName, message);
    when(producer.send(expectedRecord)).thenReturn(returnValue);
    kafkaNotification.sendInternalToProducer(producer, NotificationInterface.NotificationType.HOOK, Arrays.asList(new String[] { message }));
    verify(producer).send(expectedRecord);
}
Also used : RecordMetadata(org.apache.kafka.clients.producer.RecordMetadata) Producer(org.apache.kafka.clients.producer.Producer) TopicPartition(org.apache.kafka.common.TopicPartition) ProducerRecord(org.apache.kafka.clients.producer.ProducerRecord) Future(java.util.concurrent.Future) Properties(java.util.Properties) Test(org.testng.annotations.Test)
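
The same MockProducer approach covers the success path: with autoComplete enabled every send is acknowledged immediately, and history() returns the records handed to the producer, so the Mockito verify step above can become a plain assertion. A rough sketch, again with a placeholder topic and JUnit-style assertions assumed:

MockProducer<String, String> producer =
        new MockProducer<>(true, new StringSerializer(), new StringSerializer());
producer.send(new ProducerRecord<>("test-topic", "This is a test message"));
// history() lists every record sent through this producer, in order
assertEquals(1, producer.history().size());
assertEquals("This is a test message", producer.history().get(0).value());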

Example 5 with Producer

Use of org.apache.kafka.clients.producer.Producer in the project samza by apache.

From the class TestKafkaSystemProducerJava, method testInstantiateProducer:

@Test
public void testInstantiateProducer() {
    KafkaSystemProducer ksp = new KafkaSystemProducer("SysName", new ExponentialSleepStrategy(2.0, 200, 10000), new AbstractFunction0<Producer<byte[], byte[]>>() {

        @Override
        public Producer<byte[], byte[]> apply() {
            return new KafkaProducer<>(new HashMap<String, Object>());
        }
    }, new KafkaSystemProducerMetrics("SysName", new MetricsRegistryMap()), new AbstractFunction0<Object>() {

        @Override
        public Object apply() {
            return System.currentTimeMillis();
        }
    }, false);
    long now = System.currentTimeMillis();
    assertTrue((Long) ksp.clock().apply() >= now);
}
Also used : HashMap(java.util.HashMap) ExponentialSleepStrategy(org.apache.samza.util.ExponentialSleepStrategy) KafkaProducer(org.apache.kafka.clients.producer.KafkaProducer) Producer(org.apache.kafka.clients.producer.Producer) MetricsRegistryMap(org.apache.samza.metrics.MetricsRegistryMap) Test(org.junit.Test)
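
One caveat about the producer factory in this test: new KafkaProducer<>(new HashMap<String, Object>()) would throw a ConfigException if it were ever applied, because a real producer needs at least bootstrap servers and key/value serializers; the test only exercises the clock function, so the factory appears never to be invoked. A minimal configuration map that would actually construct the byte-array producer the factory promises (the broker address is a placeholder):

Map<String, Object> config = new HashMap<>();
config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
Producer<byte[], byte[]> producer = new KafkaProducer<>(config);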

Aggregations

Producer (org.apache.kafka.clients.producer.Producer)16 Properties (java.util.Properties)13 ProducerRecord (org.apache.kafka.clients.producer.ProducerRecord)12 TopicPartition (org.apache.kafka.common.TopicPartition)9 ArrayList (java.util.ArrayList)7 List (java.util.List)7 HashMap (java.util.HashMap)6 Objects (java.util.Objects)6 Future (java.util.concurrent.Future)6 Consumer (org.apache.kafka.clients.consumer.Consumer)6 Test (org.junit.Test)6 Test (org.testng.annotations.Test)6 Set (java.util.Set)5 Arrays (java.util.Arrays)4 Collection (java.util.Collection)4 Collections (java.util.Collections)4 Map (java.util.Map)4 ProcessorContext (org.apache.kafka.streams.processor.ProcessorContext)4 CONSUMER (brave.Span.Kind.CONSUMER)3 PRODUCER (brave.Span.Kind.PRODUCER)3