
Example 6 with Admin

Use of org.apache.kafka.clients.admin.Admin in project kafka by apache.

The class JoinStoreIntegrationTest, method streamJoinChangelogTopicShouldBeConfiguredWithDeleteOnlyCleanupPolicy.

@Test
public void streamJoinChangelogTopicShouldBeConfiguredWithDeleteOnlyCleanupPolicy() throws Exception {
    STREAMS_CONFIG.put(StreamsConfig.APPLICATION_ID_CONFIG, APP_ID + "-changelog-cleanup-policy");
    final StreamsBuilder builder = new StreamsBuilder();
    final KStream<String, Integer> left = builder.stream(INPUT_TOPIC_LEFT, Consumed.with(Serdes.String(), Serdes.Integer()));
    final KStream<String, Integer> right = builder.stream(INPUT_TOPIC_RIGHT, Consumed.with(Serdes.String(), Serdes.Integer()));
    final CountDownLatch latch = new CountDownLatch(1);
    left.join(right, Integer::sum, JoinWindows.of(ofMillis(100)), StreamJoined.with(Serdes.String(), Serdes.Integer(), Serdes.Integer()).withStoreName("join-store"));
    try (final KafkaStreams kafkaStreams = new KafkaStreams(builder.build(), STREAMS_CONFIG);
        final Admin admin = Admin.create(ADMIN_CONFIG)) {
        kafkaStreams.setStateListener((newState, oldState) -> {
            if (newState == KafkaStreams.State.RUNNING) {
                latch.countDown();
            }
        });
        kafkaStreams.start();
        latch.await();
        final Collection<ConfigResource> changelogTopics = Stream.of(
                "join-store-integration-test-changelog-cleanup-policy-join-store-this-join-store-changelog",
                "join-store-integration-test-changelog-cleanup-policy-join-store-other-join-store-changelog")
            .map(name -> new ConfigResource(Type.TOPIC, name))
            .collect(Collectors.toList());
        final Map<ConfigResource, org.apache.kafka.clients.admin.Config> topicConfig = admin.describeConfigs(changelogTopics).all().get();
        topicConfig.values().forEach(tc -> assertThat(tc.get("cleanup.policy").value(), is("delete")));
    }
}
Also used : StreamsConfig(org.apache.kafka.streams.StreamsConfig) BeforeClass(org.junit.BeforeClass) QueryableStoreTypes.keyValueStore(org.apache.kafka.streams.state.QueryableStoreTypes.keyValueStore) Assert.assertThrows(org.junit.Assert.assertThrows) IntegrationTest(org.apache.kafka.test.IntegrationTest) KStream(org.apache.kafka.streams.kstream.KStream) UnknownStateStoreException(org.apache.kafka.streams.errors.UnknownStateStoreException) StreamJoined(org.apache.kafka.streams.kstream.StreamJoined) ConfigResource(org.apache.kafka.common.config.ConfigResource) JoinWindows(org.apache.kafka.streams.kstream.JoinWindows) EmbeddedKafkaCluster(org.apache.kafka.streams.integration.utils.EmbeddedKafkaCluster) Map(java.util.Map) After(org.junit.After) Admin(org.apache.kafka.clients.admin.Admin) Serdes(org.apache.kafka.common.serialization.Serdes) MatcherAssert.assertThat(org.hamcrest.MatcherAssert.assertThat) StoreQueryParameters.fromNameAndType(org.apache.kafka.streams.StoreQueryParameters.fromNameAndType) Before(org.junit.Before) StreamsBuilder(org.apache.kafka.streams.StreamsBuilder) AfterClass(org.junit.AfterClass) Properties(java.util.Properties) TestUtils(org.apache.kafka.test.TestUtils) Consumed(org.apache.kafka.streams.kstream.Consumed) Collection(java.util.Collection) AdminClientConfig(org.apache.kafka.clients.admin.AdminClientConfig) ConsumerConfig(org.apache.kafka.clients.consumer.ConsumerConfig) Test(org.junit.Test) IOException(java.io.IOException) Category(org.junit.experimental.categories.Category) Collectors(java.util.stream.Collectors) CountDownLatch(java.util.concurrent.CountDownLatch) Stream(java.util.stream.Stream) Rule(org.junit.Rule) Matchers.is(org.hamcrest.Matchers.is) KafkaStreams(org.apache.kafka.streams.KafkaStreams) Duration.ofMillis(java.time.Duration.ofMillis) Type(org.apache.kafka.common.config.ConfigResource.Type) TemporaryFolder(org.junit.rules.TemporaryFolder)
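
The ADMIN_CONFIG passed to Admin.create above is populated elsewhere in the test class, so it is not visible in this snippet. A minimal sketch, assuming localhost:9092 stands in for the embedded cluster's bootstrap servers (the class and method names are illustrative), of how such an admin client is typically configured:

// A minimal sketch, assuming localhost:9092 stands in for the embedded
// cluster's bootstrap servers; the real test builds ADMIN_CONFIG elsewhere.
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public final class AdminConfigSketch {

    public static Admin createAdmin(final String bootstrapServers) {
        final Properties adminConfig = new Properties();
        adminConfig.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        // Admin.create accepts either Properties or a Map<String, Object>.
        return Admin.create(adminConfig);
    }

    public static void main(final String[] args) throws Exception {
        try (final Admin admin = createAdmin("localhost:9092")) {
            // Sanity check: ask the cluster for its id.
            System.out.println(admin.describeCluster().clusterId().get());
        }
    }
}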

Example 7 with Admin

Use of org.apache.kafka.clients.admin.Admin in project kafka by apache.

The class InternalTopicIntegrationTest, method getTopicProperties.

private Properties getTopicProperties(final String changelog) {
    try (final Admin adminClient = createAdminClient()) {
        final ConfigResource configResource = new ConfigResource(ConfigResource.Type.TOPIC, changelog);
        try {
            final Config config = adminClient.describeConfigs(Collections.singletonList(configResource)).values().get(configResource).get();
            final Properties properties = new Properties();
            for (final ConfigEntry configEntry : config.entries()) {
                if (configEntry.source() == ConfigEntry.ConfigSource.DYNAMIC_TOPIC_CONFIG) {
                    properties.put(configEntry.name(), configEntry.value());
                }
            }
            return properties;
        } catch (final InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }
}
Also used : ConfigEntry(org.apache.kafka.clients.admin.ConfigEntry) ConsumerConfig(org.apache.kafka.clients.consumer.ConsumerConfig) Config(org.apache.kafka.clients.admin.Config) StreamsConfig(org.apache.kafka.streams.StreamsConfig) LogConfig(kafka.log.LogConfig) ProducerConfig(org.apache.kafka.clients.producer.ProducerConfig) AdminClientConfig(org.apache.kafka.clients.admin.AdminClientConfig) Admin(org.apache.kafka.clients.admin.Admin) Properties(java.util.Properties) ExecutionException(java.util.concurrent.ExecutionException) ConfigResource(org.apache.kafka.common.config.ConfigResource)
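
describeConfigs returns one KafkaFuture per requested resource, and the method above waits on the single per-resource future via values().get(configResource).get(). A self-contained sketch of the same lookup using all() instead, assuming a reachable broker and an existing topic; the bootstrapServers parameter and the class name are placeholders, since createAdminClient() is defined elsewhere in the test class:

// A minimal sketch, assuming bootstrapServers and topic are valid; mirrors the
// method above but collects every requested resource at once via all().
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public final class TopicConfigSketch {

    public static Properties dynamicTopicConfig(final String bootstrapServers, final String topic) {
        final Properties adminProps = new Properties();
        adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        final ConfigResource resource = new ConfigResource(ConfigResource.Type.TOPIC, topic);
        try (final Admin admin = Admin.create(adminProps)) {
            final Map<ConfigResource, Config> configs =
                admin.describeConfigs(Collections.singletonList(resource)).all().get();
            final Properties properties = new Properties();
            for (final ConfigEntry entry : configs.get(resource).entries()) {
                // Keep only settings that were set explicitly on the topic.
                if (entry.source() == ConfigEntry.ConfigSource.DYNAMIC_TOPIC_CONFIG) {
                    properties.put(entry.name(), entry.value());
                }
            }
            return properties;
        } catch (final InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }
}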

Example 8 with Admin

Use of org.apache.kafka.clients.admin.Admin in project kafka by apache.

The class KafkaEmbedded, method createTopic.

/**
 * Create a Kafka topic with the given parameters.
 *
 * @param topic       The name of the topic.
 * @param partitions  The number of partitions for this topic.
 * @param replication The replication factor for (partitions of) this topic.
 * @param topicConfig Additional topic-level configuration settings.
 */
public void createTopic(final String topic, final int partitions, final int replication, final Map<String, String> topicConfig) {
    log.debug("Creating topic { name: {}, partitions: {}, replication: {}, config: {} }", topic, partitions, replication, topicConfig);
    final NewTopic newTopic = new NewTopic(topic, partitions, (short) replication);
    newTopic.configs(topicConfig);
    try (final Admin adminClient = createAdminClient()) {
        adminClient.createTopics(Collections.singletonList(newTopic)).all().get();
    } catch (final InterruptedException | ExecutionException e) {
        throw new RuntimeException(e);
    }
}
Also used : NewTopic(org.apache.kafka.clients.admin.NewTopic) Admin(org.apache.kafka.clients.admin.Admin) ExecutionException(java.util.concurrent.ExecutionException)
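
createTopics(...).all().get() blocks until every topic in the batch has been created. A standalone sketch of the same pattern, assuming a reachable broker; the bootstrap address and the tolerance of TopicExistsException are additions for illustration, not part of KafkaEmbedded:

// A minimal sketch, assuming bootstrapServers points at a running broker;
// tolerates the topic already existing, which is often fine in test setup.
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.errors.TopicExistsException;

public final class CreateTopicSketch {

    public static void createTopic(final String bootstrapServers,
                                   final String topic,
                                   final int partitions,
                                   final short replication,
                                   final Map<String, String> topicConfig) {
        final Properties adminProps = new Properties();
        adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        try (final Admin admin = Admin.create(adminProps)) {
            final NewTopic newTopic = new NewTopic(topic, partitions, replication).configs(topicConfig);
            // Blocks until the broker has created (or rejected) the topic.
            admin.createTopics(Collections.singletonList(newTopic)).all().get();
        } catch (final ExecutionException e) {
            if (!(e.getCause() instanceof TopicExistsException)) {
                throw new RuntimeException(e);
            }
        } catch (final InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }
}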

Example 9 with Admin

Use of org.apache.kafka.clients.admin.Admin in project kafka by apache.

The class ClientUtilsTest, method fetchEndOffsetsShouldReturnEmptyMapIfPartitionsAreEmpty.

@Test
public void fetchEndOffsetsShouldReturnEmptyMapIfPartitionsAreEmpty() {
    final Admin adminClient = EasyMock.createMock(AdminClient.class);
    assertTrue(fetchEndOffsets(emptySet(), adminClient).isEmpty());
}
Also used : Admin(org.apache.kafka.clients.admin.Admin) Test(org.junit.Test)

Example 10 with Admin

Use of org.apache.kafka.clients.admin.Admin in project kafka by apache.

The class ClientUtilsTest, method fetchEndOffsetsShouldRethrowInterruptedExceptionAsStreamsException.

@Test
public void fetchEndOffsetsShouldRethrowInterruptedExceptionAsStreamsException() throws Exception {
    final Admin adminClient = EasyMock.createMock(AdminClient.class);
    final ListOffsetsResult result = EasyMock.createNiceMock(ListOffsetsResult.class);
    final KafkaFuture<Map<TopicPartition, ListOffsetsResultInfo>> allFuture = EasyMock.createMock(KafkaFuture.class);
    EasyMock.expect(adminClient.listOffsets(EasyMock.anyObject())).andStubReturn(result);
    EasyMock.expect(result.all()).andStubReturn(allFuture);
    EasyMock.expect(allFuture.get()).andThrow(new InterruptedException());
    replay(adminClient, result, allFuture);
    assertThrows(StreamsException.class, () -> fetchEndOffsets(PARTITIONS, adminClient));
    verify(adminClient);
}
Also used : ListOffsetsResult(org.apache.kafka.clients.admin.ListOffsetsResult) Admin(org.apache.kafka.clients.admin.Admin) Map(java.util.Map) Test(org.junit.Test)
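
Both ClientUtilsTest cases above target an internal Kafka Streams helper. A plausible sketch of the behavior they assert, namely that an empty partition set short-circuits to an empty map and that checked failures from the admin future are rethrown as StreamsException; this is not the actual ClientUtils implementation:

// A plausible sketch of the asserted behavior, not the real ClientUtils
// internals: an empty partition set yields an empty map without touching the
// Admin client, and InterruptedException/ExecutionException surface as
// StreamsException (which is what the mocked tests above verify).
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ExecutionException;
import java.util.function.Function;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.errors.StreamsException;

final class EndOffsetsSketch {

    static Map<TopicPartition, ListOffsetsResultInfo> fetchEndOffsets(
            final Set<TopicPartition> partitions, final Admin admin) {
        if (partitions.isEmpty()) {
            return Collections.emptyMap();
        }
        // Ask for the latest ("end") offset of every partition in one request.
        final Map<TopicPartition, OffsetSpec> request = partitions.stream()
            .collect(Collectors.toMap(Function.identity(), tp -> OffsetSpec.latest()));
        try {
            return admin.listOffsets(request).all().get();
        } catch (final InterruptedException | ExecutionException e) {
            throw new StreamsException("Failed to fetch end offsets", e);
        }
    }
}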

Aggregations (classes most often used together with Admin, with usage counts)

Admin (org.apache.kafka.clients.admin.Admin) 27
ExecutionException (java.util.concurrent.ExecutionException) 12
Map (java.util.Map) 11
Properties (java.util.Properties) 9
HashMap (java.util.HashMap) 8
TopicPartition (org.apache.kafka.common.TopicPartition) 8
NewTopic (org.apache.kafka.clients.admin.NewTopic) 7
AdminClientConfig (org.apache.kafka.clients.admin.AdminClientConfig) 6
Test (org.junit.Test) 6
Collection (java.util.Collection) 5
ConfigResource (org.apache.kafka.common.config.ConfigResource) 5
Arrays (java.util.Arrays) 4
Collections (java.util.Collections) 4
Optional (java.util.Optional) 4
Set (java.util.Set) 4
Config (org.apache.kafka.clients.admin.Config) 4
ListOffsetsResult (org.apache.kafka.clients.admin.ListOffsetsResult) 4
MirrorMakerConfig (org.apache.kafka.connect.mirror.MirrorMakerConfig) 4
Logger (org.slf4j.Logger) 4
IOException (java.io.IOException) 3