
Example 1 with StringSerializer

Use of org.apache.kafka.common.serialization.StringSerializer in project kafka by apache.

From the class KafkaProducerTest, method testMetadataFetch.

@PrepareOnlyThisForTest(Metadata.class)
@Test
public void testMetadataFetch() throws Exception {
    Properties props = new Properties();
    props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9999");
    KafkaProducer<String, String> producer = new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());
    Metadata metadata = PowerMock.createNiceMock(Metadata.class);
    MemberModifier.field(KafkaProducer.class, "metadata").set(producer, metadata);
    String topic = "topic";
    ProducerRecord<String, String> record = new ProducerRecord<>(topic, "value");
    Collection<Node> nodes = Collections.singletonList(new Node(0, "host1", 1000));
    final Cluster emptyCluster = new Cluster(null, nodes, Collections.<PartitionInfo>emptySet(), Collections.<String>emptySet(), Collections.<String>emptySet());
    final Cluster cluster = new Cluster("dummy", Collections.singletonList(new Node(0, "host1", 1000)), Arrays.asList(new PartitionInfo(topic, 0, null, null, null)), Collections.<String>emptySet(), Collections.<String>emptySet());
    // Expect exactly one fetch for each attempt to refresh while topic metadata is not available
    final int refreshAttempts = 5;
    EasyMock.expect(metadata.fetch()).andReturn(emptyCluster).times(refreshAttempts - 1);
    EasyMock.expect(metadata.fetch()).andReturn(cluster).once();
    EasyMock.expect(metadata.fetch()).andThrow(new IllegalStateException("Unexpected call to metadata.fetch()")).anyTimes();
    PowerMock.replay(metadata);
    producer.send(record);
    PowerMock.verify(metadata);
    // Expect exactly one fetch if topic metadata is available
    PowerMock.reset(metadata);
    EasyMock.expect(metadata.fetch()).andReturn(cluster).once();
    EasyMock.expect(metadata.fetch()).andThrow(new IllegalStateException("Unexpected call to metadata.fetch()")).anyTimes();
    PowerMock.replay(metadata);
    producer.send(record, null);
    PowerMock.verify(metadata);
    // Expect exactly one fetch if topic metadata is available
    PowerMock.reset(metadata);
    EasyMock.expect(metadata.fetch()).andReturn(cluster).once();
    EasyMock.expect(metadata.fetch()).andThrow(new IllegalStateException("Unexpected call to metadata.fetch()")).anyTimes();
    PowerMock.replay(metadata);
    producer.partitionsFor(topic);
    PowerMock.verify(metadata);
}
Also used : Node(org.apache.kafka.common.Node) Metadata(org.apache.kafka.clients.Metadata) Cluster(org.apache.kafka.common.Cluster) Properties(java.util.Properties) PartitionInfo(org.apache.kafka.common.PartitionInfo) StringSerializer(org.apache.kafka.common.serialization.StringSerializer) Test(org.junit.Test) PrepareOnlyThisForTest(org.powermock.core.classloader.annotations.PrepareOnlyThisForTest)
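
Outside of a PowerMock test, the same serializer wiring is just the three-argument constructor call. Below is a minimal sketch for comparison; the broker address, topic name, and helper method name are placeholders, not part of the test above.

private static void sendWithStringSerializers() {
    Properties props = new Properties();
    props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    try (KafkaProducer<String, String> producer =
            new KafkaProducer<>(props, new StringSerializer(), new StringSerializer())) {
        // partitionsFor() blocks until metadata for the topic is available (bounded by max.block.ms)
        producer.partitionsFor("topic");
        // send() likewise waits for metadata before the record can be assigned a partition
        producer.send(new ProducerRecord<>("topic", "key", "value"));
    }
}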

Example 2 with StringSerializer

Use of org.apache.kafka.common.serialization.StringSerializer in project kafka by apache.

From the class KafkaProducerTest, method testPartitionerClose.

@Test
public void testPartitionerClose() throws Exception {
    try {
        Properties props = new Properties();
        props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9999");
        props.setProperty(ProducerConfig.PARTITIONER_CLASS_CONFIG, MockPartitioner.class.getName());
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props, new StringSerializer(), new StringSerializer());
        Assert.assertEquals(1, MockPartitioner.INIT_COUNT.get());
        Assert.assertEquals(0, MockPartitioner.CLOSE_COUNT.get());
        producer.close();
        Assert.assertEquals(1, MockPartitioner.INIT_COUNT.get());
        Assert.assertEquals(1, MockPartitioner.CLOSE_COUNT.get());
    } finally {
        // cleanup since we are using mutable static variables in MockPartitioner
        MockPartitioner.resetCounters();
    }
}
Also used : MockPartitioner(org.apache.kafka.test.MockPartitioner) Properties(java.util.Properties) StringSerializer(org.apache.kafka.common.serialization.StringSerializer) Test(org.junit.Test) PrepareOnlyThisForTest(org.powermock.core.classloader.annotations.PrepareOnlyThisForTest)
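
MockPartitioner is a Kafka test helper; a hypothetical equivalent would look like the sketch below (the class name and counter fields are illustrative, not a Kafka API). The producer instantiates the configured partitioner class reflectively via PARTITIONER_CLASS_CONFIG and calls close() on it when the producer itself is closed, which is what the counters make observable in the test above.

import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

// Hypothetical partitioner that tracks its lifecycle the same way MockPartitioner does,
// so a test can assert that KafkaProducer constructs it once and closes it once.
public class CountingPartitioner implements Partitioner {
    public static final AtomicInteger INIT_COUNT = new AtomicInteger(0);
    public static final AtomicInteger CLOSE_COUNT = new AtomicInteger(0);

    public CountingPartitioner() {
        INIT_COUNT.incrementAndGet();
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // no configuration needed for this sketch
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        return 0; // always partition 0; a real partitioner would hash the key
    }

    @Override
    public void close() {
        CLOSE_COUNT.incrementAndGet();
    }
}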

Example 3 with StringSerializer

Use of org.apache.kafka.common.serialization.StringSerializer in project kafka by apache.

From the class KafkaProducerTest, method testInterceptorConstructClose.

@Test
public void testInterceptorConstructClose() throws Exception {
    try {
        Properties props = new Properties();
        // test with client ID assigned by KafkaProducer
        props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9999");
        props.setProperty(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, MockProducerInterceptor.class.getName());
        props.setProperty(MockProducerInterceptor.APPEND_STRING_PROP, "something");
        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props, new StringSerializer(), new StringSerializer());
        Assert.assertEquals(1, MockProducerInterceptor.INIT_COUNT.get());
        Assert.assertEquals(0, MockProducerInterceptor.CLOSE_COUNT.get());
        // Cluster metadata will only be updated on calling onSend.
        Assert.assertNull(MockProducerInterceptor.CLUSTER_META.get());
        producer.close();
        Assert.assertEquals(1, MockProducerInterceptor.INIT_COUNT.get());
        Assert.assertEquals(1, MockProducerInterceptor.CLOSE_COUNT.get());
    } finally {
        // cleanup since we are using mutable static variables in MockProducerInterceptor
        MockProducerInterceptor.resetCounters();
    }
}
Also used : MockProducerInterceptor(org.apache.kafka.test.MockProducerInterceptor) Properties(java.util.Properties) StringSerializer(org.apache.kafka.common.serialization.StringSerializer) Test(org.junit.Test) PrepareOnlyThisForTest(org.powermock.core.classloader.annotations.PrepareOnlyThisForTest)
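
MockProducerInterceptor is likewise a Kafka test helper. The sketch below shows what a comparable interceptor could look like; the class name, the append.string config key, and the suffix behaviour are assumptions for illustration, while the callback signatures are those of Kafka's ProducerInterceptor interface.

import java.util.Map;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Hypothetical interceptor that appends a configured suffix to every value passed to onSend().
public class AppendingInterceptor implements ProducerInterceptor<String, String> {
    private String suffix;

    @Override
    public void configure(Map<String, ?> configs) {
        suffix = (String) configs.get("append.string"); // assumed custom config key
    }

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        return new ProducerRecord<>(record.topic(), record.partition(), record.key(), record.value() + suffix);
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // no-op; a real interceptor could collect delivery metrics here
    }

    @Override
    public void close() {
        // no resources to release in this sketch
    }
}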

Example 4 with StringSerializer

Use of org.apache.kafka.common.serialization.StringSerializer in project kafka by apache.

From the class MockProducerTest, method testPartitioner.

@Test
public void testPartitioner() throws Exception {
    PartitionInfo partitionInfo0 = new PartitionInfo(topic, 0, null, null, null);
    PartitionInfo partitionInfo1 = new PartitionInfo(topic, 1, null, null, null);
    Cluster cluster = new Cluster(null, new ArrayList<Node>(0), asList(partitionInfo0, partitionInfo1), Collections.<String>emptySet(), Collections.<String>emptySet());
    MockProducer<String, String> producer = new MockProducer<>(cluster, true, new DefaultPartitioner(), new StringSerializer(), new StringSerializer());
    ProducerRecord<String, String> record = new ProducerRecord<>(topic, "key", "value");
    Future<RecordMetadata> metadata = producer.send(record);
    assertEquals("Partition should be correct", 1, metadata.get().partition());
    producer.clear();
    assertEquals("Clear should erase our history", 0, producer.history().size());
}
Also used : DefaultPartitioner(org.apache.kafka.clients.producer.internals.DefaultPartitioner) Node(org.apache.kafka.common.Node) Cluster(org.apache.kafka.common.Cluster) PartitionInfo(org.apache.kafka.common.PartitionInfo) StringSerializer(org.apache.kafka.common.serialization.StringSerializer) Test(org.junit.Test)
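
For completeness, a shorter sketch of the same MockProducer pattern without an explicit Cluster: with autoComplete set to true each send() completes immediately, and history() returns the records in send order. The method name is illustrative and the snippet assumes the same JUnit static imports as the test above.

@Test
public void testMockProducerHistory() throws Exception {
    // no-cluster constructor; records are auto-completed as soon as they are sent
    MockProducer<String, String> mock =
            new MockProducer<>(true, new StringSerializer(), new StringSerializer());
    mock.send(new ProducerRecord<>("topic", "key", "value"));
    assertEquals("One record should have been recorded", 1, mock.history().size());
    assertEquals("value", mock.history().get(0).value());
    mock.clear();
    assertEquals("Clear should erase our history", 0, mock.history().size());
}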

Example 5 with StringSerializer

Use of org.apache.kafka.common.serialization.StringSerializer in project beam by apache.

From the class ResumeFromCheckpointStreamingTest, method produce.

private static void produce(Map<String, Instant> messages) {
    Properties producerProps = new Properties();
    producerProps.putAll(EMBEDDED_KAFKA_CLUSTER.getProps());
    producerProps.put("request.required.acks", 1);
    producerProps.put("bootstrap.servers", EMBEDDED_KAFKA_CLUSTER.getBrokerList());
    Serializer<String> stringSerializer = new StringSerializer();
    Serializer<Instant> instantSerializer = new InstantSerializer();
    try (@SuppressWarnings("unchecked") KafkaProducer<String, Instant> kafkaProducer = new KafkaProducer(producerProps, stringSerializer, instantSerializer)) {
        for (Map.Entry<String, Instant> en : messages.entrySet()) {
            kafkaProducer.send(new ProducerRecord<>(TOPIC, en.getKey(), en.getValue()));
        }
        // explicit close() is redundant here: the try-with-resources block already closes the producer
        kafkaProducer.close();
    }
}
Also used : KafkaProducer(org.apache.kafka.clients.producer.KafkaProducer) InstantSerializer(org.apache.beam.sdk.io.kafka.serialization.InstantSerializer) Instant(org.joda.time.Instant) Properties(java.util.Properties) StringSerializer(org.apache.kafka.common.serialization.StringSerializer) Map(java.util.Map) ImmutableMap(com.google.common.collect.ImmutableMap)
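
InstantSerializer above comes from org.apache.beam.sdk.io.kafka.serialization. As a rough sketch of what such a value serializer involves, the class below implements Kafka's Serializer interface; the encoding chosen here (epoch milliseconds as eight big-endian bytes) and the class name are assumptions for illustration, not necessarily what Beam's class does.

import java.nio.ByteBuffer;
import java.util.Map;
import org.apache.kafka.common.serialization.Serializer;
import org.joda.time.Instant;

// Hypothetical serializer for Joda-Time Instant values: writes the epoch millis as 8 bytes.
public class SimpleInstantSerializer implements Serializer<Instant> {
    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // nothing to configure
    }

    @Override
    public byte[] serialize(String topic, Instant instant) {
        return ByteBuffer.allocate(Long.BYTES).putLong(instant.getMillis()).array();
    }

    @Override
    public void close() {
        // no resources to release
    }
}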

Aggregations

StringSerializer (org.apache.kafka.common.serialization.StringSerializer): 11
Properties (java.util.Properties): 10
Test (org.junit.Test): 5
KafkaProducer (org.apache.kafka.clients.producer.KafkaProducer): 4
PrepareOnlyThisForTest (org.powermock.core.classloader.annotations.PrepareOnlyThisForTest): 4
Cluster (org.apache.kafka.common.Cluster): 3
Node (org.apache.kafka.common.Node): 3
PartitionInfo (org.apache.kafka.common.PartitionInfo): 3
ISE (io.druid.java.util.common.ISE): 2
File (java.io.File): 2
IOException (java.io.IOException): 2
InputStream (java.io.InputStream): 2
Map (java.util.Map): 2
TopicExistsException (kafka.common.TopicExistsException): 2
ZkClient (org.I0Itec.zkclient.ZkClient): 2
Metadata (org.apache.kafka.clients.Metadata): 2
ProducerRecord (org.apache.kafka.clients.producer.ProducerRecord): 2
DateTime (org.joda.time.DateTime): 2
DateTimeZone (org.joda.time.DateTimeZone): 2
DateTimeFormatter (org.joda.time.format.DateTimeFormatter): 2