
Example 1 with StreamsMetrics

Use of org.apache.kafka.streams.StreamsMetrics in project kafka by apache, from the class StreamTaskTest, method shouldCheckpointOffsetsOnCommit.

@SuppressWarnings("unchecked")
@Test
public void shouldCheckpointOffsetsOnCommit() throws Exception {
    final String storeName = "test";
    final String changelogTopic = ProcessorStateManager.storeChangelogTopic("appId", storeName);
    final InMemoryKeyValueStore inMemoryStore = new InMemoryKeyValueStore(storeName, null, null) {

        @Override
        public void init(final ProcessorContext context, final StateStore root) {
            context.register(root, true, null);
        }

        @Override
        public boolean persistent() {
            return true;
        }
    };
    final ProcessorTopology topology = new ProcessorTopology(Collections.<ProcessorNode>emptyList(), Collections.<String, SourceNode>emptyMap(), Collections.<String, SinkNode>emptyMap(), Collections.<StateStore>singletonList(inMemoryStore), Collections.singletonMap(storeName, changelogTopic), Collections.<StateStore>emptyList());
    final TopicPartition partition = new TopicPartition(changelogTopic, 0);
    final NoOpRecordCollector recordCollector = new NoOpRecordCollector() {

        @Override
        public Map<TopicPartition, Long> offsets() {
            return Collections.singletonMap(partition, 543L);
        }
    };
    restoreStateConsumer.updatePartitions(changelogTopic, Collections.singletonList(new PartitionInfo(changelogTopic, 0, null, new Node[0], new Node[0])));
    restoreStateConsumer.updateEndOffsets(Collections.singletonMap(partition, 0L));
    restoreStateConsumer.updateBeginningOffsets(Collections.singletonMap(partition, 0L));
    final StreamsMetrics streamsMetrics = new MockStreamsMetrics(new Metrics());
    final TaskId taskId = new TaskId(0, 0);
    final MockTime time = new MockTime();
    final StreamsConfig config = createConfig(baseDir);
    final StreamTask streamTask = new StreamTask(taskId, "appId", partitions, topology, consumer, changelogReader, config, streamsMetrics, stateDirectory, new ThreadCache("testCache", 0, streamsMetrics), time, recordCollector);
    time.sleep(config.getLong(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG));
    streamTask.commit();
    final OffsetCheckpoint checkpoint = new OffsetCheckpoint(new File(stateDirectory.directoryForTask(taskId), ProcessorStateManager.CHECKPOINT_FILE_NAME));
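    // the collector reports offset 543; the checkpoint records the next offset to read from the changelog, i.e. 543 + 1 = 544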
    assertThat(checkpoint.read(), equalTo(Collections.singletonMap(partition, 544L)));
}
Also used: OffsetCheckpoint (org.apache.kafka.streams.state.internals.OffsetCheckpoint), TaskId (org.apache.kafka.streams.processor.TaskId), NoOpRecordCollector (org.apache.kafka.test.NoOpRecordCollector), StateStore (org.apache.kafka.streams.processor.StateStore), ProcessorContext (org.apache.kafka.streams.processor.ProcessorContext), NoOpProcessorContext (org.apache.kafka.test.NoOpProcessorContext), Metrics (org.apache.kafka.common.metrics.Metrics), StreamsMetrics (org.apache.kafka.streams.StreamsMetrics), TopicPartition (org.apache.kafka.common.TopicPartition), ThreadCache (org.apache.kafka.streams.state.internals.ThreadCache), PartitionInfo (org.apache.kafka.common.PartitionInfo), File (java.io.File), InMemoryKeyValueStore (org.apache.kafka.streams.state.internals.InMemoryKeyValueStore), MockTime (org.apache.kafka.common.utils.MockTime), StreamsConfig (org.apache.kafka.streams.StreamsConfig), Test (org.junit.Test)
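
The MockStreamsMetrics handed to StreamTask above implements the public org.apache.kafka.streams.StreamsMetrics interface, which is the same handle user code obtains from ProcessorContext.metrics(). As a minimal sketch of that interface in use (the class, sensor, and metric names below are invented for illustration, and it assumes this era's addSensor(String, Sensor.RecordingLevel) method plus the Total stat, which newer clients replace with CumulativeSum):

import java.util.Collections;

import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Total;
import org.apache.kafka.streams.StreamsMetrics;

public final class CustomSensorSketch {

    // Registers a sensor through the public StreamsMetrics interface; works against
    // the MockStreamsMetrics used in these tests or the real runtime implementation.
    public static Sensor registerSkippedRecordsSensor(final StreamsMetrics metrics) {
        final Sensor sensor = metrics.addSensor("skipped-records", Sensor.RecordingLevel.INFO);
        sensor.add(
            new MetricName("skipped-records-total", "example-group",
                "Total number of records skipped", Collections.<String, String>emptyMap()),
            new Total());
        return sensor;
    }

    private CustomSensorSketch() { }
}

A processor would typically call registerSkippedRecordsSensor(context.metrics()) once in init() and then sensor.record() for each skipped record.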

Example 2 with StreamsMetrics

Use of org.apache.kafka.streams.StreamsMetrics in project kafka by apache, from the class StreamTaskTest, method shouldFlushRecordCollectorOnFlushState.

@Test
public void shouldFlushRecordCollectorOnFlushState() throws Exception {
    final AtomicBoolean flushed = new AtomicBoolean(false);
    final NoOpRecordCollector recordCollector = new NoOpRecordCollector() {

        @Override
        public void flush() {
            flushed.set(true);
        }
    };
    final StreamsMetrics streamsMetrics = new MockStreamsMetrics(new Metrics());
    final StreamTask streamTask = new StreamTask(taskId00, "appId", partitions, topology, consumer, changelogReader, createConfig(baseDir), streamsMetrics, stateDirectory, testCache, time, recordCollector);
    streamTask.flushState();
    assertTrue(flushed.get());
}
Also used: AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean), Metrics (org.apache.kafka.common.metrics.Metrics), NoOpRecordCollector (org.apache.kafka.test.NoOpRecordCollector), StreamsMetrics (org.apache.kafka.streams.StreamsMetrics), Test (org.junit.Test)
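
The test relies on a common delegation check: override a single method on a collaborator, flip an AtomicBoolean, and assert that the code under test reached it. A self-contained sketch of the same pattern follows; the Flushable interface and Buffer class are hypothetical stand-ins for RecordCollector and StreamTask, not Kafka types:

import static org.junit.Assert.assertTrue;

import java.util.concurrent.atomic.AtomicBoolean;

import org.junit.Test;

public class DelegationCheckSketchTest {

    // Hypothetical collaborator and subject under test.
    interface Flushable {
        void flush();
    }

    static class Buffer {
        private final Flushable downstream;

        Buffer(final Flushable downstream) {
            this.downstream = downstream;
        }

        void flushState() {
            downstream.flush();
        }
    }

    @Test
    public void shouldDelegateFlushToCollaborator() {
        final AtomicBoolean flushed = new AtomicBoolean(false);
        final Buffer buffer = new Buffer(new Flushable() {
            @Override
            public void flush() {
                flushed.set(true);
            }
        });
        buffer.flushState();
        assertTrue(flushed.get());
    }
}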

Example 3 with StreamsMetrics

Use of org.apache.kafka.streams.StreamsMetrics in project kafka by apache, from the class RocksDBWindowStoreSupplierTest, method shouldHaveMeteredStoreWhenNotLoggedOrCached.

@SuppressWarnings("unchecked")
@Test
public void shouldHaveMeteredStoreWhenNotLoggedOrCached() throws Exception {
    store = createStore(false, false);
    store.init(context, store);
    final StreamsMetrics metrics = context.metrics();
    assertFalse(metrics.metrics().isEmpty());
}
Also used: MockStreamsMetrics (org.apache.kafka.streams.processor.internals.MockStreamsMetrics), StreamsMetrics (org.apache.kafka.streams.StreamsMetrics), Test (org.junit.Test)
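
The assertion only checks that initializing the metered store registered something. When a test like this fails, it helps to see which sensors actually exist; a hedged sketch that lists them using nothing beyond StreamsMetrics.metrics() and the MetricName accessors (the helper class name is made up):

import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.StreamsMetrics;

public final class MetricsDumpSketch {

    // Prints group, name, and tags of every metric currently registered.
    public static void dump(final StreamsMetrics streamsMetrics) {
        for (final MetricName name : streamsMetrics.metrics().keySet()) {
            System.out.println(name.group() + " / " + name.name() + " tags=" + name.tags());
        }
    }

    private MetricsDumpSketch() { }
}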

Example 4 with StreamsMetrics

Use of org.apache.kafka.streams.StreamsMetrics in project kafka by apache, from the class RocksDBSessionStoreSupplierTest, method shouldHaveMeteredStoreWhenNotLoggedOrCached.

@SuppressWarnings("unchecked")
@Test
public void shouldHaveMeteredStoreWhenNotLoggedOrCached() throws Exception {
    store = createStore(false, false);
    store.init(context, store);
    final StreamsMetrics metrics = context.metrics();
    assertFalse(metrics.metrics().isEmpty());
}
Also used: MockStreamsMetrics (org.apache.kafka.streams.processor.internals.MockStreamsMetrics), StreamsMetrics (org.apache.kafka.streams.StreamsMetrics), Test (org.junit.Test)

Example 5 with StreamsMetrics

Use of org.apache.kafka.streams.StreamsMetrics in project kafka by apache, from the class RocksDBSessionStoreSupplierTest, method shouldHaveMeteredStoreWhenCached.

@SuppressWarnings("unchecked")
@Test
public void shouldHaveMeteredStoreWhenCached() throws Exception {
    store = createStore(false, true);
    store.init(context, store);
    final StreamsMetrics metrics = context.metrics();
    assertFalse(metrics.metrics().isEmpty());
}
Also used: MockStreamsMetrics (org.apache.kafka.streams.processor.internals.MockStreamsMetrics), StreamsMetrics (org.apache.kafka.streams.StreamsMetrics), Test (org.junit.Test)
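
All of these mocks bottom out in a plain org.apache.kafka.common.metrics.Metrics registry: MockStreamsMetrics is constructed over one in the examples above, and the metered store layers register their sensors into it. The self-contained sketch below exercises that registry directly; the sensor and metric names are invented, and reading the value via metricValue() assumes a kafka-clients version where KafkaMetric.metricValue() exists (older clients expose the deprecated value() instead):

import static org.junit.Assert.assertEquals;

import java.util.Collections;

import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.metrics.Metrics;
import org.apache.kafka.common.metrics.Sensor;
import org.apache.kafka.common.metrics.stats.Avg;
import org.junit.Test;

public class MetricsRegistrySketchTest {

    @Test
    public void shouldRecordAverageIntoRegistry() {
        final Metrics registry = new Metrics();
        final MetricName avgName = registry.metricName(
            "put-latency-avg", "sketch-group", "average put latency",
            Collections.<String, String>emptyMap());

        // A sensor aggregates recorded values into the stats attached to it.
        final Sensor sensor = registry.sensor("put-latency");
        sensor.add(avgName, new Avg());

        sensor.record(10.0);
        sensor.record(20.0);

        assertEquals(15.0, (Double) registry.metrics().get(avgName).metricValue(), 0.0001);
    }
}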

Aggregations

StreamsMetrics (org.apache.kafka.streams.StreamsMetrics): 21
Test (org.junit.Test): 19
MockStreamsMetrics (org.apache.kafka.streams.processor.internals.MockStreamsMetrics): 16
Metrics (org.apache.kafka.common.metrics.Metrics): 5
NoOpRecordCollector (org.apache.kafka.test.NoOpRecordCollector): 5
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 2
MetricName (org.apache.kafka.common.MetricName): 2
Sensor (org.apache.kafka.common.metrics.Sensor): 2
LogContext (org.apache.kafka.common.utils.LogContext): 2
Before (org.junit.Before): 2
File (java.io.File): 1
PartitionInfo (org.apache.kafka.common.PartitionInfo): 1
TopicPartition (org.apache.kafka.common.TopicPartition): 1
MockTime (org.apache.kafka.common.utils.MockTime): 1
StreamsConfig (org.apache.kafka.streams.StreamsConfig): 1
ProductionExceptionHandler (org.apache.kafka.streams.errors.ProductionExceptionHandler): 1
ProcessorContext (org.apache.kafka.streams.processor.ProcessorContext): 1
StateStore (org.apache.kafka.streams.processor.StateStore): 1
TaskId (org.apache.kafka.streams.processor.TaskId): 1
InMemoryKeyValueStore (org.apache.kafka.streams.state.internals.InMemoryKeyValueStore): 1