
Example 41 with ProcessorStateException

Use of org.apache.kafka.streams.errors.ProcessorStateException in project kafka by apache.

The class MeteredSessionStore, method remove():

@Override
public void remove(final Windowed<K> sessionKey) {
    Objects.requireNonNull(sessionKey, "sessionKey can't be null");
    Objects.requireNonNull(sessionKey.key(), "sessionKey.key() can't be null");
    Objects.requireNonNull(sessionKey.window(), "sessionKey.window() can't be null");
    try {
        maybeMeasureLatency(() -> {
            final Bytes key = keyBytes(sessionKey.key());
            wrapped().remove(new Windowed<>(key, sessionKey.window()));
        }, time, removeSensor);
    } catch (final ProcessorStateException e) {
        final String message = String.format(e.getMessage(), sessionKey.key());
        throw new ProcessorStateException(message, e);
    }
}
Also used: Bytes (org.apache.kafka.common.utils.Bytes), ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException)
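
The catch block above reuses the wrapped exception's message as a format string and substitutes the concrete session key before rethrowing. A minimal caller-side sketch of handling that rethrown ProcessorStateException (the SessionCleanup class, its store field, and removeAndReport are hypothetical and not part of the Kafka code):

import org.apache.kafka.streams.errors.ProcessorStateException;
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.state.SessionStore;

public class SessionCleanup {

    private final SessionStore<String, Long> store; // hypothetical handle to a metered session store

    public SessionCleanup(final SessionStore<String, Long> store) {
        this.store = store;
    }

    // Removes one session; the ProcessorStateException thrown by the metered layer
    // already carries the formatted session key, and its cause holds the original error.
    public void removeAndReport(final Windowed<String> sessionKey) {
        try {
            store.remove(sessionKey);
        } catch (final ProcessorStateException e) {
            System.err.println("Failed to remove session: " + e.getMessage());
            throw e;
        }
    }
}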

Example 42 with ProcessorStateException

Use of org.apache.kafka.streams.errors.ProcessorStateException in project kafka by apache.

The class AbstractTask, method initializeOffsetLimits():

protected void initializeOffsetLimits() {
    for (TopicPartition partition : partitions) {
        try {
            // TODO: batch API?
            OffsetAndMetadata metadata = consumer.committed(partition);
            stateMgr.putOffsetLimit(partition, metadata != null ? metadata.offset() : 0L);
        } catch (AuthorizationException e) {
            throw new ProcessorStateException(String.format("task [%s] AuthorizationException when initializing offsets for %s", id, partition), e);
        } catch (WakeupException e) {
            throw e;
        } catch (KafkaException e) {
            throw new ProcessorStateException(String.format("task [%s] Failed to initialize offsets for %s", id, partition), e);
        }
    }
}
Also used: AuthorizationException (org.apache.kafka.common.errors.AuthorizationException), TopicPartition (org.apache.kafka.common.TopicPartition), OffsetAndMetadata (org.apache.kafka.clients.consumer.OffsetAndMetadata), KafkaException (org.apache.kafka.common.KafkaException), ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException), WakeupException (org.apache.kafka.common.errors.WakeupException)
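
This method shows a common translation pattern in the task lifecycle: AuthorizationException and other KafkaExceptions are wrapped in ProcessorStateException with task and partition context, while WakeupException is rethrown untouched so a pending shutdown signal is not swallowed. A standalone sketch of the same pattern (the OffsetLimits class and its parameters are assumptions for illustration, not Kafka API):

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.AuthorizationException;
import org.apache.kafka.common.errors.WakeupException;
import org.apache.kafka.streams.errors.ProcessorStateException;

public final class OffsetLimits {

    private OffsetLimits() { }

    // Looks up the committed offset for one partition, translating client failures
    // into ProcessorStateException while letting WakeupException propagate untouched.
    public static long committedOrZero(final Consumer<?, ?> consumer,
                                       final String taskId,
                                       final TopicPartition partition) {
        try {
            final OffsetAndMetadata metadata = consumer.committed(partition);
            return metadata != null ? metadata.offset() : 0L;
        } catch (final AuthorizationException e) {
            throw new ProcessorStateException(
                String.format("task [%s] AuthorizationException when initializing offsets for %s", taskId, partition), e);
        } catch (final WakeupException e) {
            // Raised by Consumer.wakeup() during shutdown; rethrow so it is not swallowed.
            throw e;
        } catch (final KafkaException e) {
            throw new ProcessorStateException(
                String.format("task [%s] Failed to initialize offsets for %s", taskId, partition), e);
        }
    }
}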

Example 43 with ProcessorStateException

Use of org.apache.kafka.streams.errors.ProcessorStateException in project kafka by apache.

The class ProcessorStateManager, method flush():

@Override
public void flush(final InternalProcessorContext context) {
    if (!this.stores.isEmpty()) {
        log.debug("{} Flushing all stores registered in the state manager", logPrefix);
        for (StateStore store : this.stores.values()) {
            try {
                log.trace("{} Flushing store={}", logPrefix, store.name());
                store.flush();
            } catch (Exception e) {
                throw new ProcessorStateException(String.format("%s Failed to flush state store %s", logPrefix, store.name()), e);
            }
        }
    }
}
Also used: StateStore (org.apache.kafka.streams.processor.StateStore), ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException), IOException (java.io.IOException), StreamsException (org.apache.kafka.streams.errors.StreamsException), LockException (org.apache.kafka.streams.errors.LockException)
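
Flush failures from any store are wrapped with the store name, so the failing store can be identified from the exception alone. A hedged standalone sketch of the same flush-all-and-wrap idea (StoreFlusher and its parameters are assumptions, not the actual ProcessorStateManager fields):

import java.util.Map;
import org.apache.kafka.streams.errors.ProcessorStateException;
import org.apache.kafka.streams.processor.StateStore;

public final class StoreFlusher {

    private StoreFlusher() { }

    // Flushes every registered store, wrapping the first failure with the store name
    // so the failing store can be identified from the exception message alone.
    public static void flushAll(final String logPrefix, final Map<String, StateStore> stores) {
        for (final StateStore store : stores.values()) {
            try {
                store.flush();
            } catch (final Exception e) {
                throw new ProcessorStateException(
                    String.format("%s Failed to flush state store %s", logPrefix, store.name()), e);
            }
        }
    }
}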

Example 44 with ProcessorStateException

Use of org.apache.kafka.streams.errors.ProcessorStateException in project kafka by apache.

The class RocksDBStore, method putAll():

@Override
public void putAll(List<KeyValue<K, V>> entries) {
    try (WriteBatch batch = new WriteBatch()) {
        for (KeyValue<K, V> entry : entries) {
            final byte[] rawKey = serdes.rawKey(entry.key);
            if (entry.value == null) {
                db.delete(rawKey);
            } else {
                final byte[] value = serdes.rawValue(entry.value);
                batch.put(rawKey, value);
            }
        }
        db.write(wOptions, batch);
    } catch (RocksDBException e) {
        throw new ProcessorStateException("Error while batch writing to store " + this.name, e);
    }
}
Also used: RocksDBException (org.rocksdb.RocksDBException), WriteBatch (org.rocksdb.WriteBatch), ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException)
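
The batched puts are applied in a single db.write call, and any RocksDBException is surfaced as a ProcessorStateException tagged with the store name. A minimal standalone sketch of the same idea against a raw RocksDB handle (the BatchWriteExample class is hypothetical; unlike the snippet above, deletes here are added to the WriteBatch rather than issued directly against the db):

import java.util.List;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.errors.ProcessorStateException;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatch;
import org.rocksdb.WriteOptions;

public final class BatchWriteExample {

    private BatchWriteExample() { }

    // Writes all entries in one WriteBatch; a null value is treated as a delete.
    public static void writeBatch(final RocksDB db,
                                  final WriteOptions writeOptions,
                                  final String storeName,
                                  final List<KeyValue<byte[], byte[]>> entries) {
        try (final WriteBatch batch = new WriteBatch()) {
            for (final KeyValue<byte[], byte[]> entry : entries) {
                if (entry.value == null) {
                    batch.delete(entry.key);
                } else {
                    batch.put(entry.key, entry.value);
                }
            }
            // The batch is applied in one write; any failure surfaces as RocksDBException.
            db.write(writeOptions, batch);
        } catch (final RocksDBException e) {
            throw new ProcessorStateException("Error while batch writing to store " + storeName, e);
        }
    }
}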

Example 45 with ProcessorStateException

Use of org.apache.kafka.streams.errors.ProcessorStateException in project apache-kafka-on-k8s by banzaicloud.

The class RocksDBStore, method approximateNumEntries():

/**
 * Return an approximate count of key-value mappings in this store.
 *
 * <code>RocksDB</code> cannot return an exact entry count without doing a
 * full scan, so this method relies on the <code>rocksdb.estimate-num-keys</code>
 * property to get an approximate count. The returned size also includes
 * a count of dirty keys in the store's in-memory cache, which may lead to some
 * double-counting of entries and inflate the estimate.
 *
 * @return an approximate count of key-value mappings in the store.
 */
@Override
public long approximateNumEntries() {
    validateStoreOpen();
    final long value;
    try {
        value = this.db.getLongProperty("rocksdb.estimate-num-keys");
    } catch (final RocksDBException e) {
        throw new ProcessorStateException("Error fetching property from store " + this.name, e);
    }
    if (isOverflowing(value)) {
        return Long.MAX_VALUE;
    }
    return value;
}
Also used: RocksDBException (org.rocksdb.RocksDBException), ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException)
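
The overflow guard is needed because rocksdb.estimate-num-keys is reported as an unsigned 64-bit counter, which can appear negative after conversion to a Java long. A small sketch of what the isOverflowing check and the clamping likely amount to (the helper body below is an assumption; the actual implementation is not shown in the snippet):

public final class EstimateGuard {

    private EstimateGuard() { }

    // Assumed behavior of the isOverflowing(value) check referenced above:
    // rocksdb.estimate-num-keys is an unsigned 64-bit counter, so a count larger
    // than Long.MAX_VALUE shows up as a negative long on the Java side.
    public static boolean isOverflowing(final long value) {
        return value < 0;
    }

    // Clamps a raw property value to a usable Java long, mirroring the return logic above.
    public static long clamp(final long rawEstimate) {
        return isOverflowing(rawEstimate) ? Long.MAX_VALUE : rawEstimate;
    }
}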

Aggregations

ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException): 68
Test (org.junit.Test): 23
IOException (java.io.IOException): 19
File (java.io.File): 15
RocksDBException (org.rocksdb.RocksDBException): 11
StreamsException (org.apache.kafka.streams.errors.StreamsException): 7
StateStore (org.apache.kafka.streams.processor.StateStore): 7
ArrayList (java.util.ArrayList): 6
HashMap (java.util.HashMap): 6
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 5
WriteBatch (org.rocksdb.WriteBatch): 5
ParseException (java.text.ParseException): 4
Map (java.util.Map): 4
MetricName (org.apache.kafka.common.MetricName): 4
TopicPartition (org.apache.kafka.common.TopicPartition): 4
LockException (org.apache.kafka.streams.errors.LockException): 4
MockKeyValueStore (org.apache.kafka.test.MockKeyValueStore): 4
MockStateStore (org.apache.kafka.test.MockStateStore): 4
PrepareForTest (org.powermock.core.classloader.annotations.PrepareForTest): 4
ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord): 3