
Example 1 with ListOffsetsResult

Use of org.apache.kafka.clients.admin.ListOffsetsResult in project kafka by apache.

From the class TopicAdmin, method endOffsets:

/**
 * Fetch the most recent offset for each of the supplied {@link TopicPartition} objects.
 *
 * @param partitions the topic partitions
 * @return the map of end offsets for each topic partition, or an empty map if the supplied partitions
 *         are null or empty
 * @throws UnsupportedVersionException if the admin client cannot read end offsets
 * @throws TimeoutException if the offset metadata could not be fetched before the amount of time allocated
 *         by {@code request.timeout.ms} expires, and this call can be retried
 * @throws LeaderNotAvailableException if the leader was not available and this call can be retried
 * @throws RetriableException if a retriable error occurs, or the thread is interrupted while attempting
 *         to perform this operation
 * @throws ConnectException if a non-retriable error occurs
 */
public Map<TopicPartition, Long> endOffsets(Set<TopicPartition> partitions) {
    if (partitions == null || partitions.isEmpty()) {
        return Collections.emptyMap();
    }
    Map<TopicPartition, OffsetSpec> offsetSpecMap = partitions.stream().collect(Collectors.toMap(Function.identity(), tp -> OffsetSpec.latest()));
    ListOffsetsResult resultFuture = admin.listOffsets(offsetSpecMap);
    // Get the individual result for each topic partition so we have better error messages
    Map<TopicPartition, Long> result = new HashMap<>();
    for (TopicPartition partition : partitions) {
        try {
            ListOffsetsResultInfo info = resultFuture.partitionResult(partition).get();
            result.put(partition, info.offset());
        } catch (ExecutionException e) {
            Throwable cause = e.getCause();
            String topic = partition.topic();
            if (cause instanceof AuthorizationException) {
                String msg = String.format("Not authorized to get the end offsets for topic '%s' on brokers at %s", topic, bootstrapServers());
                throw new ConnectException(msg, e);
            } else if (cause instanceof UnsupportedVersionException) {
                // Should theoretically never happen, because this method is the same as what the consumer uses and therefore
                // should exist in the broker since before the admin client was added
                String msg = String.format("API to get the end offsets for topic '%s' is unsupported on brokers at %s", topic, bootstrapServers());
                throw new UnsupportedVersionException(msg, e);
            } else if (cause instanceof TimeoutException) {
                String msg = String.format("Timed out while waiting to get end offsets for topic '%s' on brokers at %s", topic, bootstrapServers());
                throw new TimeoutException(msg, e);
            } else if (cause instanceof LeaderNotAvailableException) {
                String msg = String.format("Unable to get end offsets during leader election for topic '%s' on brokers at %s", topic, bootstrapServers());
                throw new LeaderNotAvailableException(msg, e);
            } else if (cause instanceof org.apache.kafka.common.errors.RetriableException) {
                throw (org.apache.kafka.common.errors.RetriableException) cause;
            } else {
                String msg = String.format("Error while getting end offsets for topic '%s' on brokers at %s", topic, bootstrapServers());
                throw new ConnectException(msg, e);
            }
        } catch (InterruptedException e) {
            Thread.interrupted();
            String msg = String.format("Interrupted while attempting to read end offsets for topic '%s' on brokers at %s", partition.topic(), bootstrapServers());
            throw new RetriableException(msg, e);
        }
    }
    return result;
}
Also used : Config(org.apache.kafka.clients.admin.Config) Arrays(java.util.Arrays) DescribeTopicsOptions(org.apache.kafka.clients.admin.DescribeTopicsOptions) LoggerFactory(org.slf4j.LoggerFactory) HashMap(java.util.HashMap) ConfigEntry(org.apache.kafka.clients.admin.ConfigEntry) ClusterAuthorizationException(org.apache.kafka.common.errors.ClusterAuthorizationException) LeaderNotAvailableException(org.apache.kafka.common.errors.LeaderNotAvailableException) Function(java.util.function.Function) HashSet(java.util.HashSet) ListOffsetsResult(org.apache.kafka.clients.admin.ListOffsetsResult) ConfigResource(org.apache.kafka.common.config.ConfigResource) Duration(java.time.Duration) Map(java.util.Map) Admin(org.apache.kafka.clients.admin.Admin) TopicDescription(org.apache.kafka.clients.admin.TopicDescription) TopicConfig(org.apache.kafka.common.config.TopicConfig) Utils(org.apache.kafka.common.utils.Utils) TopicPartition(org.apache.kafka.common.TopicPartition) InvalidConfigurationException(org.apache.kafka.common.errors.InvalidConfigurationException) TimeoutException(org.apache.kafka.common.errors.TimeoutException) Logger(org.slf4j.Logger) DescribeConfigsOptions(org.apache.kafka.clients.admin.DescribeConfigsOptions) AuthorizationException(org.apache.kafka.common.errors.AuthorizationException) AdminClientConfig(org.apache.kafka.clients.admin.AdminClientConfig) Collection(java.util.Collection) NewTopic(org.apache.kafka.clients.admin.NewTopic) Set(java.util.Set) KafkaFuture(org.apache.kafka.common.KafkaFuture) ConfigException(org.apache.kafka.common.config.ConfigException) Collectors(java.util.stream.Collectors) OffsetSpec(org.apache.kafka.clients.admin.OffsetSpec) Objects(java.util.Objects) ExecutionException(java.util.concurrent.ExecutionException) ListOffsetsResultInfo(org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo) RetriableException(org.apache.kafka.connect.errors.RetriableException) TopicExistsException(org.apache.kafka.common.errors.TopicExistsException) ConnectException(org.apache.kafka.connect.errors.ConnectException) TopicAuthorizationException(org.apache.kafka.common.errors.TopicAuthorizationException) UnsupportedVersionException(org.apache.kafka.common.errors.UnsupportedVersionException) Optional(java.util.Optional) UnknownTopicOrPartitionException(org.apache.kafka.common.errors.UnknownTopicOrPartitionException) CreateTopicsOptions(org.apache.kafka.clients.admin.CreateTopicsOptions) Collections(java.util.Collections)
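
For orientation, here is a minimal standalone sketch of calling Admin.listOffsets directly, mirroring what TopicAdmin.endOffsets does above for a single partition. The topic name and bootstrap address are placeholders, and error handling is left to the caller.

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

public class EndOffsetsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder bootstrap address; adjust for your cluster
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Ask for the latest (end) offset of a single, hypothetical partition
            TopicPartition tp = new TopicPartition("example-topic", 0);
            ListOffsetsResult result = admin.listOffsets(Collections.singletonMap(tp, OffsetSpec.latest()));
            // Block on the per-partition future, as TopicAdmin.endOffsets does above
            ListOffsetsResultInfo info = result.partitionResult(tp).get();
            System.out.printf("End offset of %s is %d%n", tp, info.offset());
        }
    }
}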

Example 2 with ListOffsetsResult

Use of org.apache.kafka.clients.admin.ListOffsetsResult in project kafka by apache.

From the class KafkaStreamsTest, method shouldReturnEmptyLocalStorePartitionLags:

@Test
public void shouldReturnEmptyLocalStorePartitionLags() {
    // Mock all calls made to compute the offset lags.
    final ListOffsetsResult result = EasyMock.mock(ListOffsetsResult.class);
    final KafkaFutureImpl<Map<TopicPartition, ListOffsetsResultInfo>> allFuture = new KafkaFutureImpl<>();
    allFuture.complete(Collections.emptyMap());
    EasyMock.expect(result.all()).andReturn(allFuture);
    final MockAdminClient mockAdminClient = EasyMock.partialMockBuilder(MockAdminClient.class).addMockedMethod("listOffsets", Map.class).createMock();
    EasyMock.expect(mockAdminClient.listOffsets(anyObject())).andStubReturn(result);
    final MockClientSupplier mockClientSupplier = EasyMock.partialMockBuilder(MockClientSupplier.class).addMockedMethod("getAdmin").createMock();
    EasyMock.expect(mockClientSupplier.getAdmin(anyObject())).andReturn(mockAdminClient);
    EasyMock.replay(result, mockAdminClient, mockClientSupplier);
    try (final KafkaStreams streams = new KafkaStreams(getBuilderWithSource().build(), props, mockClientSupplier, time)) {
        streams.start();
        assertEquals(0, streams.allLocalStorePartitionLags().size());
    }
}
Also used : ListOffsetsResult(org.apache.kafka.clients.admin.ListOffsetsResult) MockClientSupplier(org.apache.kafka.test.MockClientSupplier) MockAdminClient(org.apache.kafka.clients.admin.MockAdminClient) KafkaFutureImpl(org.apache.kafka.common.internals.KafkaFutureImpl) Map(java.util.Map) HashMap(java.util.HashMap) PrepareForTest(org.powermock.core.classloader.annotations.PrepareForTest) Test(org.junit.Test)

Example 3 with ListOffsetsResult

Use of org.apache.kafka.clients.admin.ListOffsetsResult in project kafka by apache.

From the class AssignmentTestUtils, method createMockAdminClientForAssignor:

// If you don't care about setting the end offsets for each specific topic partition, the helper method
// getTopicPartitionOffsetMap is useful for building this input map for all partitions
public static AdminClient createMockAdminClientForAssignor(final Map<TopicPartition, Long> changelogEndOffsets) {
    final AdminClient adminClient = EasyMock.createMock(AdminClient.class);
    final ListOffsetsResult result = EasyMock.createNiceMock(ListOffsetsResult.class);
    final KafkaFutureImpl<Map<TopicPartition, ListOffsetsResultInfo>> allFuture = new KafkaFutureImpl<>();
    allFuture.complete(changelogEndOffsets.entrySet().stream().collect(Collectors.toMap(Entry::getKey, t -> {
        final ListOffsetsResultInfo info = EasyMock.createNiceMock(ListOffsetsResultInfo.class);
        expect(info.offset()).andStubReturn(t.getValue());
        EasyMock.replay(info);
        return info;
    })));
    expect(adminClient.listOffsets(anyObject())).andStubReturn(result);
    expect(result.all()).andStubReturn(allFuture);
    EasyMock.replay(result);
    return adminClient;
}
Also used : ListOffsetsResult(org.apache.kafka.clients.admin.ListOffsetsResult) Entry(java.util.Map.Entry) ListOffsetsResultInfo(org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo) KafkaFutureImpl(org.apache.kafka.common.internals.KafkaFutureImpl) HashMap(java.util.HashMap) Utils.entriesToMap(org.apache.kafka.common.utils.Utils.entriesToMap) Map(java.util.Map) Collections.emptyMap(java.util.Collections.emptyMap) TreeMap(java.util.TreeMap) AdminClient(org.apache.kafka.clients.admin.AdminClient)
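
The comment above mentions a helper for building the input map when the exact per-partition end offsets do not matter. As a rough, hypothetical sketch (uniformEndOffsets and partitionCounts are invented names for illustration, not the real AssignmentTestUtils.getTopicPartitionOffsetMap), such a helper could look like this:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.TopicPartition;

public final class OffsetMapSketch {

    // Assign the same end offset to every partition of each named topic.
    public static Map<TopicPartition, Long> uniformEndOffsets(final Map<String, Integer> partitionCounts,
                                                              final long endOffset) {
        final Map<TopicPartition, Long> offsets = new HashMap<>();
        partitionCounts.forEach((topic, numPartitions) -> {
            for (int partition = 0; partition < numPartitions; partition++) {
                offsets.put(new TopicPartition(topic, partition), endOffset);
            }
        });
        return offsets;
    }
}

A call such as createMockAdminClientForAssignor(uniformEndOffsets(Collections.singletonMap("store-changelog", 3), 100L)) would then report the same end offset for all three changelog partitions.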

Example 4 with ListOffsetsResult

Use of org.apache.kafka.clients.admin.ListOffsetsResult in project kafka by apache.

From the class ClientUtilsTest, method fetchEndOffsetsShouldRethrowInterruptedExceptionAsStreamsException:

@Test
public void fetchEndOffsetsShouldRethrowInterruptedExceptionAsStreamsException() throws Exception {
    final Admin adminClient = EasyMock.createMock(AdminClient.class);
    final ListOffsetsResult result = EasyMock.createNiceMock(ListOffsetsResult.class);
    final KafkaFuture<Map<TopicPartition, ListOffsetsResultInfo>> allFuture = EasyMock.createMock(KafkaFuture.class);
    EasyMock.expect(adminClient.listOffsets(EasyMock.anyObject())).andStubReturn(result);
    EasyMock.expect(result.all()).andStubReturn(allFuture);
    EasyMock.expect(allFuture.get()).andThrow(new InterruptedException());
    replay(adminClient, result, allFuture);
    assertThrows(StreamsException.class, () -> fetchEndOffsets(PARTITIONS, adminClient));
    verify(adminClient);
}
Also used : ListOffsetsResult(org.apache.kafka.clients.admin.ListOffsetsResult) Admin(org.apache.kafka.clients.admin.Admin) Map(java.util.Map) Test(org.junit.Test)

Example 5 with ListOffsetsResult

Use of org.apache.kafka.clients.admin.ListOffsetsResult in project kafka by apache.

From the class ClientUtilsTest, method fetchEndOffsetsShouldRethrowRuntimeExceptionAsStreamsException:

@Test
public void fetchEndOffsetsShouldRethrowRuntimeExceptionAsStreamsException() throws Exception {
    final Admin adminClient = EasyMock.createMock(AdminClient.class);
    final ListOffsetsResult result = EasyMock.createNiceMock(ListOffsetsResult.class);
    final KafkaFuture<Map<TopicPartition, ListOffsetsResultInfo>> allFuture = EasyMock.createMock(KafkaFuture.class);
    EasyMock.expect(adminClient.listOffsets(EasyMock.anyObject())).andStubReturn(result);
    EasyMock.expect(result.all()).andStubReturn(allFuture);
    EasyMock.expect(allFuture.get()).andThrow(new RuntimeException());
    replay(adminClient, result, allFuture);
    assertThrows(StreamsException.class, () -> fetchEndOffsets(PARTITIONS, adminClient));
    verify(adminClient);
}
Also used : ListOffsetsResult(org.apache.kafka.clients.admin.ListOffsetsResult) Admin(org.apache.kafka.clients.admin.Admin) Map(java.util.Map) Test(org.junit.Test)
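
Both tests assert that a failure while waiting on the admin result surfaces as a StreamsException. As a hedged sketch only (not the actual ClientUtils implementation), a fetch-end-offsets helper consistent with these tests might look roughly like this:

import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ExecutionException;
import java.util.stream.Collectors;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.errors.StreamsException;

public final class FetchEndOffsetsSketch {

    // Any exception thrown while fetching the end offsets is rethrown as a
    // StreamsException, matching the behaviour the two tests above assert.
    public static Map<TopicPartition, ListOffsetsResultInfo> fetchEndOffsets(final Collection<TopicPartition> partitions,
                                                                             final Admin admin) {
        final Map<TopicPartition, OffsetSpec> specs = partitions.stream()
            .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
        final ListOffsetsResult result = admin.listOffsets(specs);
        try {
            return result.all().get();
        } catch (final InterruptedException | ExecutionException | RuntimeException e) {
            throw new StreamsException("Unable to obtain end offsets from kafka", e);
        }
    }
}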

Aggregations

Map (java.util.Map): 10
ListOffsetsResult (org.apache.kafka.clients.admin.ListOffsetsResult): 10
HashMap (java.util.HashMap): 7
Test (org.junit.Test): 6
Admin (org.apache.kafka.clients.admin.Admin): 5
KafkaFutureImpl (org.apache.kafka.common.internals.KafkaFutureImpl): 5
AdminClient (org.apache.kafka.clients.admin.AdminClient): 4
ListOffsetsResultInfo (org.apache.kafka.clients.admin.ListOffsetsResult.ListOffsetsResultInfo): 4
Collections.emptyMap (java.util.Collections.emptyMap): 3
Collections.singletonMap (java.util.Collections.singletonMap): 3
ExecutionException (java.util.concurrent.ExecutionException): 3
TopicPartition (org.apache.kafka.common.TopicPartition): 3
Duration (java.time.Duration): 2
Arrays (java.util.Arrays): 2
Collection (java.util.Collection): 2
Collections (java.util.Collections): 2
HashSet (java.util.HashSet): 2
Entry (java.util.Map.Entry): 2
Set (java.util.Set): 2
Function (java.util.function.Function): 2