
Example 11 with LogCaptureAppender

Use of org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender in project kafka by apache.

The class AdjustStreamThreadCountTest, method shouldResizeCacheAfterThreadRemovalTimesOut.

@Test
public void shouldResizeCacheAfterThreadRemovalTimesOut() throws InterruptedException {
    final long totalCacheBytes = 10L;
    final Properties props = new Properties();
    props.putAll(properties);
    props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);
    props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, totalCacheBytes);
    try (final KafkaStreams kafkaStreams = new KafkaStreams(builder.build(), props)) {
        addStreamStateChangeListener(kafkaStreams);
        startStreamsAndWaitForRunning(kafkaStreams);
        try (final LogCaptureAppender appender = LogCaptureAppender.createAndRegister(KafkaStreams.class)) {
            assertThrows(TimeoutException.class, () -> kafkaStreams.removeStreamThread(Duration.ofSeconds(0)));
            for (final String log : appender.getMessages()) {
                // with one of the two threads removed, all 10 cache bytes should be available to the remaining thread
                if (log.endsWith("Resizing thread cache due to thread removal, new cache size per thread is 10")) {
                    return;
                }
            }
        }
    }
    fail();
}
Also used: KafkaStreams (org.apache.kafka.streams.KafkaStreams), LogCaptureAppender (org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender), Utils.mkObjectProperties (org.apache.kafka.common.utils.Utils.mkObjectProperties), Properties (java.util.Properties), IntegrationTest (org.apache.kafka.test.IntegrationTest), Test (org.junit.Test)
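
The explicit loop-plus-fail() scan above can also be expressed with the Hamcrest matchers the later examples use (CoreMatchers.hasItem and CoreMatchers.endsWith, with MatcherAssert.assertThat statically imported). A minimal sketch of that variant, reusing the same appender and expected message rather than the project's actual test code:

try (final LogCaptureAppender appender = LogCaptureAppender.createAndRegister(KafkaStreams.class)) {
    assertThrows(TimeoutException.class, () -> kafkaStreams.removeStreamThread(Duration.ofSeconds(0)));
    // hasItem(endsWith(...)) replaces the manual scan; non-matching log lines are simply ignored
    assertThat(appender.getMessages(), hasItem(endsWith("Resizing thread cache due to thread removal, new cache size per thread is 10")));
}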

Example 12 with LogCaptureAppender

Use of org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender in project kafka by apache.

The class StreamsConfigTest, method shouldLogWarningWhenRetriesIsUsed.

@SuppressWarnings("deprecation")
@Test
public void shouldLogWarningWhenRetriesIsUsed() {
    props.put(StreamsConfig.RETRIES_CONFIG, 0);
    LogCaptureAppender.setClassLoggerToDebug(StreamsConfig.class);
    try (final LogCaptureAppender appender = LogCaptureAppender.createAndRegister(StreamsConfig.class)) {
        new StreamsConfig(props);
        assertThat(appender.getMessages(), hasItem("Configuration parameter `" + StreamsConfig.RETRIES_CONFIG + "` is deprecated and will be removed in the 4.0.0 release."));
    }
}
Also used: LogCaptureAppender (org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender), StreamsTestUtils.getStreamsConfig (org.apache.kafka.test.StreamsTestUtils.getStreamsConfig), Test (org.junit.Test)
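
The props fixture itself is not shown in this snippet; per the Also used line it is built with StreamsTestUtils.getStreamsConfig. A minimal hand-rolled equivalent, assuming only the two settings StreamsConfig requires (an illustrative sketch, not the project's actual fixture):

final Properties props = new Properties();
// application.id and bootstrap.servers are the two mandatory Streams settings
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "log-capture-test-app");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");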

Example 13 with LogCaptureAppender

Use of org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender in project kafka by apache.

The class StoreChangelogReaderTest, method shouldNotThrowOnUnknownRevokedPartition.

@Test
public void shouldNotThrowOnUnknownRevokedPartition() {
    LogCaptureAppender.setClassLoggerToDebug(StoreChangelogReader.class);
    try (final LogCaptureAppender appender = LogCaptureAppender.createAndRegister(StoreChangelogReader.class)) {
        changelogReader.unregister(Collections.singletonList(new TopicPartition("unknown", 0)));
        assertThat(appender.getMessages(), hasItem("test-reader Changelog partition unknown-0 could not be found," + " it could be already cleaned up during the handling of task corruption and never restore again"));
    }
}
Also used: TopicPartition (org.apache.kafka.common.TopicPartition), LogCaptureAppender (org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender), Test (org.junit.Test)

Example 14 with LogCaptureAppender

Use of org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender in project kafka by apache.

The class InternalTopicManagerTest, method shouldLogWhenTopicNotFoundAndNotThrowException.

@Test
public void shouldLogWhenTopicNotFoundAndNotThrowException() {
    mockAdminClient.addTopic(false, topic1, Collections.singletonList(new TopicPartitionInfo(0, broker1, cluster, Collections.emptyList())), null);
    final InternalTopicConfig internalTopicConfig = new RepartitionTopicConfig(topic1, Collections.emptyMap());
    internalTopicConfig.setNumberOfPartitions(1);
    final InternalTopicConfig internalTopicConfigII = new RepartitionTopicConfig("internal-topic", Collections.emptyMap());
    internalTopicConfigII.setNumberOfPartitions(1);
    final Map<String, InternalTopicConfig> topicConfigMap = new HashMap<>();
    topicConfigMap.put(topic1, internalTopicConfig);
    topicConfigMap.put("internal-topic", internalTopicConfigII);
    LogCaptureAppender.setClassLoggerToDebug(InternalTopicManager.class);
    try (final LogCaptureAppender appender = LogCaptureAppender.createAndRegister(InternalTopicManager.class)) {
        internalTopicManager.makeReady(topicConfigMap);
        assertThat(appender.getMessages(), hasItem("stream-thread [" + threadName + "] Topic internal-topic is unknown or not found, hence not existed yet.\n" + "Error message was: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: Topic internal-topic not found."));
    }
}
Also used: TopicPartitionInfo (org.apache.kafka.common.TopicPartitionInfo), HashMap (java.util.HashMap), LogCaptureAppender (org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender), Test (org.junit.Test)

Example 15 with LogCaptureAppender

Use of org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender in project kafka by apache.

The class StateDirectoryTest, method shouldLogManualUserCallMessage.

@Test
public void shouldLogManualUserCallMessage() {
    final TaskId taskId = new TaskId(0, 0);
    final File taskDirectory = directory.getOrCreateDirectoryForTask(taskId);
    final File testFile = new File(taskDirectory, "testFile");
    assertThat(testFile.mkdir(), is(true));
    assertThat(directory.directoryForTaskIsEmpty(taskId), is(false));
    try (final LogCaptureAppender appender = LogCaptureAppender.createAndRegister(StateDirectory.class)) {
        directory.clean();
        assertThat(appender.getMessages(), hasItem(endsWith("as user calling cleanup.")));
    }
}
Also used: TaskId (org.apache.kafka.streams.processor.TaskId), LogCaptureAppender (org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender), File (java.io.File), Test (org.junit.Test)

Aggregations

LogCaptureAppender (org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender): 66
Test (org.junit.Test): 65
Windowed (org.apache.kafka.streams.kstream.Windowed): 16
Bytes (org.apache.kafka.common.utils.Bytes): 14
Properties (java.util.Properties): 13
StreamsBuilder (org.apache.kafka.streams.StreamsBuilder): 13
MetricName (org.apache.kafka.common.MetricName): 11
StringSerializer (org.apache.kafka.common.serialization.StringSerializer): 10
StreamsConfig (org.apache.kafka.streams.StreamsConfig): 10
TopologyTestDriver (org.apache.kafka.streams.TopologyTestDriver): 10
File (java.io.File): 8
Serdes (org.apache.kafka.common.serialization.Serdes): 8
MatcherAssert.assertThat (org.hamcrest.MatcherAssert.assertThat): 8
TopicPartition (org.apache.kafka.common.TopicPartition): 7
StreamsTestUtils (org.apache.kafka.test.StreamsTestUtils): 7
CoreMatchers.hasItem (org.hamcrest.CoreMatchers.hasItem): 7
Duration (java.time.Duration): 6
StringDeserializer (org.apache.kafka.common.serialization.StringDeserializer): 6
KeyValueTimestamp (org.apache.kafka.streams.KeyValueTimestamp): 6
Consumed (org.apache.kafka.streams.kstream.Consumed): 6
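
All of the examples above follow the same try-with-resources shape: register the appender for the class under test, exercise the code, then assert on the captured messages. A condensed, self-contained sketch of that pattern, using a hypothetical SomeComponent class and log line purely for illustration:

import static org.hamcrest.CoreMatchers.hasItem;
import static org.hamcrest.MatcherAssert.assertThat;

import org.apache.kafka.streams.processor.internals.testutil.LogCaptureAppender;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SomeComponentLoggingTest {

    // Hypothetical class under test, standing in for KafkaStreams, StreamsConfig, etc.
    static class SomeComponent {
        private static final Logger log = LoggerFactory.getLogger(SomeComponent.class);

        void doSomething() {
            log.info("expected log line");
        }
    }

    @Test
    public void shouldLogExpectedMessage() {
        // createAndRegister attaches the appender to SomeComponent's logger; closing the
        // try-with-resources block detaches it again so other tests are unaffected.
        try (final LogCaptureAppender appender = LogCaptureAppender.createAndRegister(SomeComponent.class)) {
            new SomeComponent().doSomething();
            // getMessages() returns the captured log messages as plain strings
            assertThat(appender.getMessages(), hasItem("expected log line"));
        }
    }
}

If debug-level output needs to be captured as well, LogCaptureAppender.setClassLoggerToDebug(SomeComponent.class) can be called before registering the appender, as Examples 12 to 14 do.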