Example 1 with ProcessorStateException

Use of org.apache.kafka.streams.errors.ProcessorStateException in project kafka by apache.

From class GlobalStateManagerImpl, method flush.

public void flush(final InternalProcessorContext context) {
    log.debug("Flushing all global stores registered in the state manager");
    for (StateStore store : this.stores.values()) {
        try {
            log.trace("Flushing global store={}", store.name());
            store.flush();
        } catch (Exception e) {
            throw new ProcessorStateException(String.format("Failed to flush global state store %s", store.name()), e);
        }
    }
}
Also used: StateStore(org.apache.kafka.streams.processor.StateStore) ProcessorStateException(org.apache.kafka.streams.errors.ProcessorStateException) IOException(java.io.IOException) StreamsException(org.apache.kafka.streams.errors.StreamsException) LockException(org.apache.kafka.streams.errors.LockException)
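
Since flush() wraps any store failure in a ProcessorStateException that names the failing store, callers can handle all flush errors in one place. The sketch below is illustrative only; the stateManager and log fields are assumptions, not part of the Kafka source above.

// Hypothetical caller (not from the Kafka source): flush global state and
// report which store failed before letting the exception propagate.
void flushGlobalState(final InternalProcessorContext context) {
    try {
        stateManager.flush(context);  // may throw ProcessorStateException
    } catch (final ProcessorStateException e) {
        // the message names the failing store; the original error is the cause
        log.error("Global state flush failed: {}", e.getMessage(), e.getCause());
        throw e;
    }
}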

Example 2 with ProcessorStateException

Use of org.apache.kafka.streams.errors.ProcessorStateException in project kafka by apache.

From class ProcessorStateManager, method updateStandbyStates.

public List<ConsumerRecord<byte[], byte[]>> updateStandbyStates(TopicPartition storePartition, List<ConsumerRecord<byte[], byte[]>> records) {
    long limit = offsetLimit(storePartition);
    List<ConsumerRecord<byte[], byte[]>> remainingRecords = null;
    // restore states from changelog records
    StateRestoreCallback restoreCallback = restoreCallbacks.get(storePartition.topic());
    long lastOffset = -1L;
    int count = 0;
    for (ConsumerRecord<byte[], byte[]> record : records) {
        if (record.offset() < limit) {
            try {
                restoreCallback.restore(record.key(), record.value());
            } catch (Exception e) {
                throw new ProcessorStateException(String.format("%s exception caught while trying to restore state from %s", logPrefix, storePartition), e);
            }
            lastOffset = record.offset();
        } else {
            if (remainingRecords == null) {
                remainingRecords = new ArrayList<>(records.size() - count);
            }
            remainingRecords.add(record);
        }
        count++;
    }
    // record the restored offset for its change log partition
    restoredOffsets.put(storePartition, lastOffset + 1);
    return remainingRecords;
}
Also used: StateRestoreCallback(org.apache.kafka.streams.processor.StateRestoreCallback) ArrayList(java.util.ArrayList) ProcessorStateException(org.apache.kafka.streams.errors.ProcessorStateException) ConsumerRecord(org.apache.kafka.clients.consumer.ConsumerRecord) OffsetCheckpoint(org.apache.kafka.streams.state.internals.OffsetCheckpoint) IOException(java.io.IOException) StreamsException(org.apache.kafka.streams.errors.StreamsException) LockException(org.apache.kafka.streams.errors.LockException)
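
The restoreCallback.restore(key, value) call is where store-specific failures originate; updateStandbyStates rewraps them with the log prefix and partition so the error is attributable. A minimal StateRestoreCallback might look like the sketch below; the map-backed store and the null-key check are assumptions for illustration (a byte[]-keyed HashMap uses identity hashing and is not suitable for a real store).

// Hypothetical callback (not from the Kafka source): restore changelog
// records into an in-memory map, rejecting records with a null key.
final Map<byte[], byte[]> restored = new HashMap<>();
final StateRestoreCallback restoreCallback = (key, value) -> {
    if (key == null) {
        // propagates up and is rewrapped as a ProcessorStateException
        // by updateStandbyStates
        throw new IllegalStateException("changelog record with null key");
    }
    restored.put(key, value);
};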

Example 3 with ProcessorStateException

Use of org.apache.kafka.streams.errors.ProcessorStateException in project kafka by apache.

From class Segments, method openExisting.

void openExisting(final ProcessorContext context) {
    try {
        File dir = new File(context.stateDir(), name);
        if (dir.exists()) {
            String[] list = dir.list();
            if (list != null) {
                long[] segmentIds = new long[list.length];
                for (int i = 0; i < list.length; i++) {
                    segmentIds[i] = segmentIdFromSegmentName(list[i]);
                }
                // open segments in the id order
                Arrays.sort(segmentIds);
                for (long segmentId : segmentIds) {
                    if (segmentId >= 0) {
                        getOrCreateSegment(segmentId, context);
                    }
                }
            }
        } else {
            if (!dir.mkdir()) {
                throw new ProcessorStateException(String.format("dir %s doesn't exist and cannot be created for segments %s", dir, name));
            }
        }
    } catch (Exception ex) {
        // ignore: discovering existing segments is best-effort; note that this
        // also swallows the ProcessorStateException thrown above when the
        // directory cannot be created
    }
}
Also used: File(java.io.File) ProcessorStateException(org.apache.kafka.streams.errors.ProcessorStateException) InvalidStateStoreException(org.apache.kafka.streams.errors.InvalidStateStoreException)
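
openExisting relies on segmentIdFromSegmentName returning a negative id for files that do not follow the segment naming scheme, which the segmentId >= 0 guard then skips. A plausible sketch of such a parser is below; the "<storeName>-<id>" naming scheme is an assumption, not taken from the Kafka source.

// Hypothetical parser (the real Kafka implementation may differ): extract the
// numeric suffix from "<storeName>-<id>"; return -1 for any file name that
// does not match, so the caller's segmentId >= 0 guard skips it.
long segmentIdFromSegmentName(final String segmentName, final String storeName) {
    final String prefix = storeName + "-";
    if (!segmentName.startsWith(prefix) || segmentName.length() == prefix.length()) {
        return -1L;
    }
    try {
        return Long.parseLong(segmentName.substring(prefix.length()));
    } catch (final NumberFormatException e) {
        return -1L;
    }
}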

Example 4 with ProcessorStateException

Use of org.apache.kafka.streams.errors.ProcessorStateException in project kafka by apache.

From class GlobalStateManagerImplTest, method shouldAttemptToCloseAllStoresEvenWhenSomeException.

@Test
public void shouldAttemptToCloseAllStoresEvenWhenSomeException() throws Exception {
    stateManager.initialize(context);
    initializeConsumer(1, 1, t1);
    initializeConsumer(1, 1, t2);
    final NoOpReadOnlyStore store = new NoOpReadOnlyStore("t1-store") {

        @Override
        public void close() {
            super.close();
            throw new RuntimeException("KABOOM!");
        }
    };
    stateManager.register(store, false, stateRestoreCallback);
    stateManager.register(store2, false, stateRestoreCallback);
    try {
        stateManager.close(Collections.<TopicPartition, Long>emptyMap());
    } catch (ProcessorStateException e) {
        // expected: the first store's close() throws, but close() must still
        // attempt to close every registered store
    }
    assertFalse(store.isOpen());
    assertFalse(store2.isOpen());
}
Also used: NoOpReadOnlyStore(org.apache.kafka.test.NoOpReadOnlyStore) ProcessorStateException(org.apache.kafka.streams.errors.ProcessorStateException) Test(org.junit.Test)
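
As written, the try/catch would also pass if close() threw nothing at all. A slightly stricter variant, shown below as an editorial suggestion rather than the Kafka source, fails explicitly when the expected exception never arrives (fail is org.junit.Assert.fail):

// Stricter variant of the expectation above: fail if close() does not throw.
try {
    stateManager.close(Collections.<TopicPartition, Long>emptyMap());
    fail("Expected a ProcessorStateException from the throwing store");
} catch (final ProcessorStateException e) {
    // expected
}
assertFalse(store.isOpen());
assertFalse(store2.isOpen());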

Example 5 with ProcessorStateException

Use of org.apache.kafka.streams.errors.ProcessorStateException in project apache-kafka-on-k8s by banzaicloud.

From class RocksDBStore, method toggleDbForBulkLoading.

private void toggleDbForBulkLoading(final boolean prepareForBulkload) {
    if (prepareForBulkload) {
        // if the store is not empty, we need to compact to get around the num.levels check
        // for bulk loading
        final String[] sstFileNames = dbDir.list(new FilenameFilter() {

            @Override
            public boolean accept(final File dir, final String name) {
                return name.matches(".*\\.sst");
            }
        });
        if (sstFileNames != null && sstFileNames.length > 0) {
            try {
                this.db.compactRange(true, 1, 0);
            } catch (final RocksDBException e) {
                throw new ProcessorStateException("Error while range compacting during restoring  store " + this.name, e);
            }
            // we need to re-open with the old num.levels again, this is a workaround
            // until https://github.com/facebook/rocksdb/pull/2740 is merged in rocksdb
            close();
            openDB(internalProcessorContext);
        }
    }
    close();
    this.prepareForBulkload = prepareForBulkload;
    openDB(internalProcessorContext);
}
Also used: FilenameFilter(java.io.FilenameFilter) RocksDBException(org.rocksdb.RocksDBException) File(java.io.File) ProcessorStateException(org.apache.kafka.streams.errors.ProcessorStateException)
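
The anonymous FilenameFilter exists only to select RocksDB SST files; on Java 8 the same check can be written as a lambda. This is an equivalent rewrite, not the project's code. Note that dbDir.list(...) returns null when the directory cannot be read, which is why the caller checks for null before testing the length.

// Equivalent Java 8 form of the filter above: keep only *.sst files.
final String[] sstFileNames = dbDir.list((dir, fileName) -> fileName.matches(".*\\.sst"));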

Aggregations

Types most often used alongside ProcessorStateException, with usage counts:

ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException): 68
Test (org.junit.Test): 23
IOException (java.io.IOException): 19
File (java.io.File): 15
RocksDBException (org.rocksdb.RocksDBException): 11
StreamsException (org.apache.kafka.streams.errors.StreamsException): 7
StateStore (org.apache.kafka.streams.processor.StateStore): 7
ArrayList (java.util.ArrayList): 6
HashMap (java.util.HashMap): 6
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 5
WriteBatch (org.rocksdb.WriteBatch): 5
ParseException (java.text.ParseException): 4
Map (java.util.Map): 4
MetricName (org.apache.kafka.common.MetricName): 4
TopicPartition (org.apache.kafka.common.TopicPartition): 4
LockException (org.apache.kafka.streams.errors.LockException): 4
MockKeyValueStore (org.apache.kafka.test.MockKeyValueStore): 4
MockStateStore (org.apache.kafka.test.MockStateStore): 4
PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest): 4
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord): 3