
Example 6 with TimestampedKeyValueStore

Use of org.apache.kafka.streams.state.TimestampedKeyValueStore in project kafka by apache.

Class TimestampedKeyValueStoreMaterializer, method materialize().

/**
 * @return a {@code StoreBuilder} for the configured {@code TimestampedKeyValueStore}
 */
public StoreBuilder<TimestampedKeyValueStore<K, V>> materialize() {
    KeyValueBytesStoreSupplier supplier = (KeyValueBytesStoreSupplier) materialized.storeSupplier();
    if (supplier == null) {
        final String name = materialized.storeName();
        supplier = Stores.persistentTimestampedKeyValueStore(name);
    }
    final StoreBuilder<TimestampedKeyValueStore<K, V>> builder = Stores.timestampedKeyValueStoreBuilder(supplier, materialized.keySerde(), materialized.valueSerde());
    if (materialized.loggingEnabled()) {
        builder.withLoggingEnabled(materialized.logConfig());
    } else {
        builder.withLoggingDisabled();
    }
    if (materialized.cachingEnabled()) {
        builder.withCachingEnabled();
    }
    return builder;
}
Also used: KeyValueBytesStoreSupplier(org.apache.kafka.streams.state.KeyValueBytesStoreSupplier) TimestampedKeyValueStore(org.apache.kafka.streams.state.TimestampedKeyValueStore)
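
For comparison, here is a minimal sketch of assembling an equivalent builder directly through the public Stores API. The store name, serdes, and boolean flags are hypothetical stand-ins for what MaterializedInternal supplies in the code above.

import java.util.Collections;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.KeyValueBytesStoreSupplier;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;
import org.apache.kafka.streams.state.TimestampedKeyValueStore;

public class TimestampedStoreBuilderSketch {

    // Mirrors the materializer above: fall back to a persistent timestamped store,
    // then apply the logging and caching flags on the builder.
    public static StoreBuilder<TimestampedKeyValueStore<String, Long>> buildStore(final String storeName,
                                                                                  final boolean loggingEnabled,
                                                                                  final boolean cachingEnabled) {
        final KeyValueBytesStoreSupplier supplier = Stores.persistentTimestampedKeyValueStore(storeName);
        final StoreBuilder<TimestampedKeyValueStore<String, Long>> builder =
            Stores.timestampedKeyValueStoreBuilder(supplier, Serdes.String(), Serdes.Long());
        if (loggingEnabled) {
            builder.withLoggingEnabled(Collections.emptyMap()); // changelog topic with default configs
        } else {
            builder.withLoggingDisabled();
        }
        if (cachingEnabled) {
            builder.withCachingEnabled();
        }
        return builder;
    }
}

The returned builder would then be added to a Topology (or attached to a processor node, as the DSL does internally) before the store can be used.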

Example 7 with TimestampedKeyValueStore

Use of org.apache.kafka.streams.state.TimestampedKeyValueStore in project kafka by apache.

Class SubscriptionStoreReceiveProcessorSupplier, method get().

@Override
public Processor<KO, SubscriptionWrapper<K>, CombinedKey<KO, K>, Change<ValueAndTimestamp<SubscriptionWrapper<K>>>> get() {
    return new ContextualProcessor<KO, SubscriptionWrapper<K>, CombinedKey<KO, K>, Change<ValueAndTimestamp<SubscriptionWrapper<K>>>>() {

        private TimestampedKeyValueStore<Bytes, SubscriptionWrapper<K>> store;

        private Sensor droppedRecordsSensor;

        @Override
        public void init(final ProcessorContext<CombinedKey<KO, K>, Change<ValueAndTimestamp<SubscriptionWrapper<K>>>> context) {
            super.init(context);
            final InternalProcessorContext<?, ?> internalProcessorContext = (InternalProcessorContext<?, ?>) context;
            droppedRecordsSensor = TaskMetrics.droppedRecordsSensor(Thread.currentThread().getName(), internalProcessorContext.taskId().toString(), internalProcessorContext.metrics());
            store = internalProcessorContext.getStateStore(storeBuilder);
            keySchema.init(context);
        }

        @Override
        public void process(final Record<KO, SubscriptionWrapper<K>> record) {
            if (record.key() == null) {
                if (context().recordMetadata().isPresent()) {
                    final RecordMetadata recordMetadata = context().recordMetadata().get();
                    LOG.warn("Skipping record due to null foreign key. " + "topic=[{}] partition=[{}] offset=[{}]", recordMetadata.topic(), recordMetadata.partition(), recordMetadata.offset());
                } else {
                    LOG.warn("Skipping record due to null foreign key. Topic, partition, and offset not known.");
                }
                droppedRecordsSensor.record();
                return;
            }
            if (record.value().getVersion() != SubscriptionWrapper.CURRENT_VERSION) {
                // Guard against incompatible SubscriptionWrapper versions: rolling upgrades require a strategy
                // for upgrading from older SubscriptionWrapper versions to newer versions.
                throw new UnsupportedVersionException("SubscriptionWrapper is of an incompatible version.");
            }
            final Bytes subscriptionKey = keySchema.toBytes(record.key(), record.value().getPrimaryKey());
            final ValueAndTimestamp<SubscriptionWrapper<K>> newValue = ValueAndTimestamp.make(record.value(), record.timestamp());
            final ValueAndTimestamp<SubscriptionWrapper<K>> oldValue = store.get(subscriptionKey);
            // This store is used by the prefix scanner in ForeignJoinSubscriptionProcessorSupplier
            if (record.value().getInstruction().equals(SubscriptionWrapper.Instruction.DELETE_KEY_AND_PROPAGATE) || record.value().getInstruction().equals(SubscriptionWrapper.Instruction.DELETE_KEY_NO_PROPAGATE)) {
                store.delete(subscriptionKey);
            } else {
                store.put(subscriptionKey, newValue);
            }
            final Change<ValueAndTimestamp<SubscriptionWrapper<K>>> change = new Change<>(newValue, oldValue);
            // note: key is non-nullable
            // note: newValue is non-nullable
            context().forward(record.withKey(new CombinedKey<>(record.key(), record.value().getPrimaryKey())).withValue(change).withTimestamp(newValue.timestamp()));
        }
    };
}
Also used: InternalProcessorContext(org.apache.kafka.streams.processor.internals.InternalProcessorContext) Change(org.apache.kafka.streams.kstream.internals.Change) ProcessorContext(org.apache.kafka.streams.processor.api.ProcessorContext) ValueAndTimestamp(org.apache.kafka.streams.state.ValueAndTimestamp) RecordMetadata(org.apache.kafka.streams.processor.api.RecordMetadata) Bytes(org.apache.kafka.common.utils.Bytes) TimestampedKeyValueStore(org.apache.kafka.streams.state.TimestampedKeyValueStore) Record(org.apache.kafka.streams.processor.api.Record) ContextualProcessor(org.apache.kafka.streams.processor.api.ContextualProcessor) Sensor(org.apache.kafka.common.metrics.Sensor) UnsupportedVersionException(org.apache.kafka.common.errors.UnsupportedVersionException)
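
Outside of the DSL internals, a plain processor obtains the same kind of store by name from its context. Below is a minimal sketch of a ContextualProcessor that keeps only the newest value per key in a TimestampedKeyValueStore; the store name "latest-values" and the String/Long types are assumptions for illustration.

import org.apache.kafka.streams.processor.api.ContextualProcessor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.TimestampedKeyValueStore;
import org.apache.kafka.streams.state.ValueAndTimestamp;

public class LatestValueProcessor extends ContextualProcessor<String, Long, String, Long> {

    private TimestampedKeyValueStore<String, Long> store;

    @Override
    public void init(final ProcessorContext<String, Long> context) {
        super.init(context);
        // Hypothetical store name; the store must be added to the topology and connected to this processor.
        store = context.getStateStore("latest-values");
    }

    @Override
    public void process(final Record<String, Long> record) {
        if (record.key() == null) {
            return; // drop keyless records, much like the null-foreign-key guard above
        }
        final ValueAndTimestamp<Long> previous = store.get(record.key());
        // Keep only the value carrying the newest timestamp.
        if (previous == null || previous.timestamp() <= record.timestamp()) {
            store.put(record.key(), ValueAndTimestamp.make(record.value(), record.timestamp()));
            context().forward(record);
        }
    }
}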

Example 8 with TimestampedKeyValueStore

Use of org.apache.kafka.streams.state.TimestampedKeyValueStore in project kafka by apache.

Class KTableImpl, method doTransformValues().

private <VR> KTable<K, VR> doTransformValues(final ValueTransformerWithKeySupplier<? super K, ? super V, ? extends VR> transformerSupplier, final MaterializedInternal<K, VR, KeyValueStore<Bytes, byte[]>> materializedInternal, final NamedInternal namedInternal, final String... stateStoreNames) {
    Objects.requireNonNull(stateStoreNames, "stateStoreNames");
    final Serde<K> keySerde;
    final Serde<VR> valueSerde;
    final String queryableStoreName;
    final StoreBuilder<TimestampedKeyValueStore<K, VR>> storeBuilder;
    if (materializedInternal != null) {
        // don't inherit parent value serde, since this operation may change the value type, more specifically:
        // we preserve the key following the order of 1) materialized, 2) parent, 3) null
        keySerde = materializedInternal.keySerde() != null ? materializedInternal.keySerde() : this.keySerde;
        // we preserve the value following the order of 1) materialized, 2) null
        valueSerde = materializedInternal.valueSerde();
        queryableStoreName = materializedInternal.queryableStoreName();
        // only materialize if materialized is specified and it has queryable name
        storeBuilder = queryableStoreName != null ? (new TimestampedKeyValueStoreMaterializer<>(materializedInternal)).materialize() : null;
    } else {
        keySerde = this.keySerde;
        valueSerde = null;
        queryableStoreName = null;
        storeBuilder = null;
    }
    final String name = namedInternal.orElseGenerateWithPrefix(builder, TRANSFORMVALUES_NAME);
    final KTableProcessorSupplier<K, V, K, VR> processorSupplier = new KTableTransformValues<>(this, transformerSupplier, queryableStoreName);
    final ProcessorParameters<K, VR, ?, ?> processorParameters = unsafeCastProcessorParametersToCompletelyDifferentType(new ProcessorParameters<>(processorSupplier, name));
    final GraphNode tableNode = new TableProcessorNode<>(name, processorParameters, storeBuilder, stateStoreNames);
    builder.addGraphNode(this.graphNode, tableNode);
    return new KTableImpl<>(name, keySerde, valueSerde, subTopologySourceNodes, queryableStoreName, processorSupplier, tableNode, builder);
}
Also used: GraphNode(org.apache.kafka.streams.kstream.internals.graph.GraphNode) ProcessorGraphNode(org.apache.kafka.streams.kstream.internals.graph.ProcessorGraphNode) TableProcessorNode(org.apache.kafka.streams.kstream.internals.graph.TableProcessorNode) TimestampedKeyValueStore(org.apache.kafka.streams.state.TimestampedKeyValueStore)
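
From the public API side, doTransformValues() is reached through KTable#transformValues. The following sketch (topic name, types, and the store name "value-lengths" are assumptions) shows that passing a named Materialized gives the operation a queryable name, which is exactly what makes the code above build a TimestampedKeyValueStore.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.ValueTransformerWithKey;
import org.apache.kafka.streams.kstream.ValueTransformerWithKeySupplier;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;

public class TransformValuesSketch {

    public static KTable<String, Long> build(final StreamsBuilder builder) {
        final KTable<String, String> table =
            builder.table("input-topic", Consumed.with(Serdes.String(), Serdes.String()));

        final ValueTransformerWithKeySupplier<String, String, Long> supplier = () ->
            new ValueTransformerWithKey<String, String, Long>() {
                @Override
                public void init(final ProcessorContext context) { }

                @Override
                public Long transform(final String readOnlyKey, final String value) {
                    return value == null ? null : (long) value.length();
                }

                @Override
                public void close() { }
            };

        // The store name makes the result queryable, so a timestamped store is materialized for it.
        return table.transformValues(supplier,
            Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("value-lengths")
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.Long()));
    }
}

Without the Materialized argument (or without a store name), queryableStoreName stays null and no store builder is created, matching the null branch above.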

Example 9 with TimestampedKeyValueStore

Use of org.apache.kafka.streams.state.TimestampedKeyValueStore in project kafka by apache.

Class KTableImpl, method doMapValues().

private <VR> KTable<K, VR> doMapValues(final ValueMapperWithKey<? super K, ? super V, ? extends VR> mapper, final Named named, final MaterializedInternal<K, VR, KeyValueStore<Bytes, byte[]>> materializedInternal) {
    final Serde<K> keySerde;
    final Serde<VR> valueSerde;
    final String queryableStoreName;
    final StoreBuilder<TimestampedKeyValueStore<K, VR>> storeBuilder;
    if (materializedInternal != null) {
        // we actually do not need to generate store names at all since if it is not specified, we will not
        // materialize the store; but we still need to burn one index BEFORE generating the processor to keep compatibility.
        if (materializedInternal.storeName() == null) {
            builder.newStoreName(MAPVALUES_NAME);
        }
        keySerde = materializedInternal.keySerde() != null ? materializedInternal.keySerde() : this.keySerde;
        valueSerde = materializedInternal.valueSerde();
        queryableStoreName = materializedInternal.queryableStoreName();
        // only materialize if materialized is specified and it has queryable name
        storeBuilder = queryableStoreName != null ? (new TimestampedKeyValueStoreMaterializer<>(materializedInternal)).materialize() : null;
    } else {
        keySerde = this.keySerde;
        valueSerde = null;
        queryableStoreName = null;
        storeBuilder = null;
    }
    final String name = new NamedInternal(named).orElseGenerateWithPrefix(builder, MAPVALUES_NAME);
    final KTableProcessorSupplier<K, V, K, VR> processorSupplier = new KTableMapValues<>(this, mapper, queryableStoreName);
    // leaving in calls to ITB until building topology with graph
    final ProcessorParameters<K, VR, ?, ?> processorParameters = unsafeCastProcessorParametersToCompletelyDifferentType(new ProcessorParameters<>(processorSupplier, name));
    final GraphNode tableNode = new TableProcessorNode<>(name, processorParameters, storeBuilder);
    builder.addGraphNode(this.graphNode, tableNode);
    // we preserve the value following the order of 1) materialized, 2) null
    return new KTableImpl<>(name, keySerde, valueSerde, subTopologySourceNodes, queryableStoreName, processorSupplier, tableNode, builder);
}
Also used: GraphNode(org.apache.kafka.streams.kstream.internals.graph.GraphNode) ProcessorGraphNode(org.apache.kafka.streams.kstream.internals.graph.ProcessorGraphNode) TableProcessorNode(org.apache.kafka.streams.kstream.internals.graph.TableProcessorNode) TimestampedKeyValueStore(org.apache.kafka.streams.state.TimestampedKeyValueStore)
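
A corresponding public-API sketch (topic, types, and the store name "doubled" are assumptions): materializing mapValues with a named store takes the queryable branch above, and the resulting store can then be read through interactive queries as a timestamped key-value store, exposing the record timestamp alongside the value.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.apache.kafka.streams.state.ValueAndTimestamp;

public class MapValuesQuerySketch {

    public static void build(final StreamsBuilder builder) {
        builder.table("input-topic", Consumed.with(Serdes.String(), Serdes.Long()))
            // The queryable store name is what triggers the TimestampedKeyValueStoreMaterializer path.
            .mapValues(value -> value * 2,
                Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("doubled")
                    .withKeySerde(Serdes.String())
                    .withValueSerde(Serdes.Long()));
    }

    public static ValueAndTimestamp<Long> lookup(final KafkaStreams streams, final String key) {
        // Query the materialized store as a timestamped store once the instance is RUNNING.
        final ReadOnlyKeyValueStore<String, ValueAndTimestamp<Long>> store = streams.store(
            StoreQueryParameters.fromNameAndType("doubled",
                QueryableStoreTypes.<String, Long>timestampedKeyValueStore()));
        return store.get(key);
    }
}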

Example 10 with TimestampedKeyValueStore

Use of org.apache.kafka.streams.state.TimestampedKeyValueStore in project kafka by apache.

Class KTableImpl, method doFilter().

private KTable<K, V> doFilter(final Predicate<? super K, ? super V> predicate, final Named named, final MaterializedInternal<K, V, KeyValueStore<Bytes, byte[]>> materializedInternal, final boolean filterNot) {
    final Serde<K> keySerde;
    final Serde<V> valueSerde;
    final String queryableStoreName;
    final StoreBuilder<TimestampedKeyValueStore<K, V>> storeBuilder;
    if (materializedInternal != null) {
        // we actually do not need to generate store names at all since if it is not specified, we will not
        // materialize the store; but we still need to burn one index BEFORE generating the processor to keep compatibility.
        if (materializedInternal.storeName() == null) {
            builder.newStoreName(FILTER_NAME);
        }
        // we can inherit parent key and value serde if user do not provide specific overrides, more specifically:
        // we preserve the key following the order of 1) materialized, 2) parent
        keySerde = materializedInternal.keySerde() != null ? materializedInternal.keySerde() : this.keySerde;
        // we preserve the value following the order of 1) materialized, 2) parent
        valueSerde = materializedInternal.valueSerde() != null ? materializedInternal.valueSerde() : this.valueSerde;
        queryableStoreName = materializedInternal.queryableStoreName();
        // only materialize if materialized is specified and it has queryable name
        storeBuilder = queryableStoreName != null ? (new TimestampedKeyValueStoreMaterializer<>(materializedInternal)).materialize() : null;
    } else {
        keySerde = this.keySerde;
        valueSerde = this.valueSerde;
        queryableStoreName = null;
        storeBuilder = null;
    }
    final String name = new NamedInternal(named).orElseGenerateWithPrefix(builder, FILTER_NAME);
    final KTableProcessorSupplier<K, V, K, V> processorSupplier = new KTableFilter<>(this, predicate, filterNot, queryableStoreName);
    final ProcessorParameters<K, V, ?, ?> processorParameters = unsafeCastProcessorParametersToCompletelyDifferentType(new ProcessorParameters<>(processorSupplier, name));
    final GraphNode tableNode = new TableProcessorNode<>(name, processorParameters, storeBuilder);
    builder.addGraphNode(this.graphNode, tableNode);
    return new KTableImpl<K, V, V>(name, keySerde, valueSerde, subTopologySourceNodes, queryableStoreName, processorSupplier, tableNode, builder);
}
Also used: GraphNode(org.apache.kafka.streams.kstream.internals.graph.GraphNode) ProcessorGraphNode(org.apache.kafka.streams.kstream.internals.graph.ProcessorGraphNode) TableProcessorNode(org.apache.kafka.streams.kstream.internals.graph.TableProcessorNode) TimestampedKeyValueStore(org.apache.kafka.streams.state.TimestampedKeyValueStore)
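
For completeness, a sketch of the two branches of doFilter() from the public API (topic name and the store name "positive-values" are assumptions): without a Materialized the filtered table gets no store of its own, while a named Materialized causes a TimestampedKeyValueStore to be materialized, inheriting the parent serdes when none are overridden.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class FilterSketch {

    public static void build(final StreamsBuilder builder) {
        final KTable<String, Long> table =
            builder.table("input-topic", Consumed.with(Serdes.String(), Serdes.Long()));

        // No Materialized: queryableStoreName stays null and no store is built.
        final KTable<String, Long> filtered =
            table.filter((key, value) -> value != null && value > 0);

        // Named Materialized: a TimestampedKeyValueStore is materialized and can be queried by name.
        final KTable<String, Long> filteredQueryable = table.filter(
            (key, value) -> value != null && value > 0,
            Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("positive-values"));
    }
}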

Aggregations

TimestampedKeyValueStore (org.apache.kafka.streams.state.TimestampedKeyValueStore): 10 usages
MaterializedInternal (org.apache.kafka.streams.kstream.internals.MaterializedInternal): 5 usages
TimestampedKeyValueStoreMaterializer (org.apache.kafka.streams.kstream.internals.TimestampedKeyValueStoreMaterializer): 5 usages
KeyValueStore (org.apache.kafka.streams.state.KeyValueStore): 5 usages
CachingKeyValueStore (org.apache.kafka.streams.state.internals.CachingKeyValueStore): 5 usages
InMemoryKeyValueStore (org.apache.kafka.streams.state.internals.InMemoryKeyValueStore): 5 usages
MeteredTimestampedKeyValueStore (org.apache.kafka.streams.state.internals.MeteredTimestampedKeyValueStore): 5 usages
Test (org.junit.Test): 5 usages
WrappedStateStore (org.apache.kafka.streams.state.internals.WrappedStateStore): 4 usages
GraphNode (org.apache.kafka.streams.kstream.internals.graph.GraphNode): 3 usages
ProcessorGraphNode (org.apache.kafka.streams.kstream.internals.graph.ProcessorGraphNode): 3 usages
TableProcessorNode (org.apache.kafka.streams.kstream.internals.graph.TableProcessorNode): 3 usages
Bytes (org.apache.kafka.common.utils.Bytes): 2 usages
StateStore (org.apache.kafka.streams.processor.StateStore): 2 usages
KeyValueBytesStoreSupplier (org.apache.kafka.streams.state.KeyValueBytesStoreSupplier): 2 usages
UnsupportedVersionException (org.apache.kafka.common.errors.UnsupportedVersionException): 1 usage
Sensor (org.apache.kafka.common.metrics.Sensor): 1 usage
Change (org.apache.kafka.streams.kstream.internals.Change): 1 usage
ContextualProcessor (org.apache.kafka.streams.processor.api.ContextualProcessor): 1 usage
ProcessorContext (org.apache.kafka.streams.processor.api.ProcessorContext): 1 usage