
Example 16 with WriteBatch

Use of org.rocksdb.WriteBatch in project kafka by apache.

From the class AbstractRocksDBSegmentedBytesStore, method getWriteBatches:

// Visible for testing
Map<S, WriteBatch> getWriteBatches(final Collection<ConsumerRecord<byte[], byte[]>> records) {
    // advance stream time to the max timestamp in the batch
    for (final ConsumerRecord<byte[], byte[]> record : records) {
        final long timestamp = keySchema.segmentTimestamp(Bytes.wrap(record.key()));
        observedStreamTime = Math.max(observedStreamTime, timestamp);
    }
    final Map<S, WriteBatch> writeBatchMap = new HashMap<>();
    // group the records into one WriteBatch per live segment
    for (final ConsumerRecord<byte[], byte[]> record : records) {
        final long timestamp = keySchema.segmentTimestamp(Bytes.wrap(record.key()));
        final long segmentId = segments.segmentId(timestamp);
        final S segment = segments.getOrCreateSegmentIfLive(segmentId, context, observedStreamTime);
        if (segment != null) {
            ChangelogRecordDeserializationHelper.applyChecksAndUpdatePosition(record, consistencyEnabled, position);
            try {
                final WriteBatch batch = writeBatchMap.computeIfAbsent(segment, s -> new WriteBatch());
                segment.addToBatch(new KeyValue<>(record.key(), record.value()), batch);
            } catch (final RocksDBException e) {
                throw new ProcessorStateException("Error restoring batch to store " + this.name, e);
            }
        }
    }
    return writeBatchMap;
}
Also used: RocksDBException (org.rocksdb.RocksDBException), HashMap (java.util.HashMap), WriteBatch (org.rocksdb.WriteBatch), ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException)
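
For context, the map returned by getWriteBatches is consumed during state restoration: each segment's batch is applied to RocksDB in a single atomic write and then closed to free its native handle. The snippet below is a minimal sketch of that consuming pattern, not the verbatim Kafka restore code; it assumes segment.write(batch) applies the batch to that segment's underlying RocksDB instance.

// Minimal sketch (assumed consumer of getWriteBatches), not Kafka's exact restore code
final Map<S, WriteBatch> writeBatchMap = getWriteBatches(records);
try {
    for (final Map.Entry<S, WriteBatch> entry : writeBatchMap.entrySet()) {
        final S segment = entry.getKey();
        // try-with-resources closes the batch and releases its native memory
        try (final WriteBatch batch = entry.getValue()) {
            // one atomic RocksDB write per segment
            segment.write(batch);
        }
    }
} catch (final RocksDBException e) {
    throw new ProcessorStateException("Error restoring batch to store " + this.name, e);
}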

Example 17 with WriteBatch

Use of org.rocksdb.WriteBatch in project kafka by apache.

From the class AbstractRocksDBSegmentedBytesStoreTest, method shouldCreateWriteBatches:

@Test
public void shouldCreateWriteBatches() {
    final String key = "a";
    final Collection<ConsumerRecord<byte[], byte[]>> records = new ArrayList<>();
    // two records whose windows fall into different segments, so two separate batches are expected
    records.add(new ConsumerRecord<>("", 0, 0L, serializeKey(new Windowed<>(key, windows[0])).get(), serializeValue(50L)));
    records.add(new ConsumerRecord<>("", 0, 0L, serializeKey(new Windowed<>(key, windows[3])).get(), serializeValue(100L)));
    final Map<S, WriteBatch> writeBatchMap = bytesStore.getWriteBatches(records);
    assertEquals(2, writeBatchMap.size());
    for (final WriteBatch batch : writeBatchMap.values()) {
        assertEquals(1, batch.count());
    }
}
Also used: Windowed (org.apache.kafka.streams.kstream.Windowed), ArrayList (java.util.ArrayList), WriteBatch (org.rocksdb.WriteBatch), ConsumerRecord (org.apache.kafka.clients.consumer.ConsumerRecord), Test (org.junit.Test)
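
The assertion on batch.count() works because a WriteBatch only buffers operations in memory until it is explicitly written to the database. The standalone sketch below illustrates this against a plain RocksDB instance; the class name, database path, and key/value bytes are hypothetical, and only standard RocksDB Java API calls are used.

import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatch;
import org.rocksdb.WriteOptions;

public class WriteBatchCountDemo {
    public static void main(final String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (final Options options = new Options().setCreateIfMissing(true);
             final RocksDB db = RocksDB.open(options, "/tmp/writebatch-demo"); // hypothetical path
             final WriteBatch batch = new WriteBatch();
             final WriteOptions writeOptions = new WriteOptions()) {
            batch.put("a".getBytes(), "1".getBytes());
            batch.put("b".getBytes(), "2".getBytes());
            // count() reports the number of buffered operations, as asserted in the test above
            System.out.println(batch.count()); // prints 2
            // all buffered operations are applied to the database atomically
            db.write(writeOptions, batch);
        }
    }
}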

Aggregations

WriteBatch (org.rocksdb.WriteBatch): 17
RocksDBException (org.rocksdb.RocksDBException): 11
StateStoreRuntimeException (org.apache.bookkeeper.statelib.api.exceptions.StateStoreRuntimeException): 5
ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException): 5
HashMap (java.util.HashMap): 4
Map (java.util.Map): 4
KV (org.apache.bookkeeper.common.kv.KV): 4
WriteOptions (org.rocksdb.WriteOptions): 4
ArrayList (java.util.ArrayList): 2
MetricException (org.apache.storm.metricstore.MetricException): 2
ColumnFamilyHandle (org.rocksdb.ColumnFamilyHandle): 2
IOException (java.io.IOException): 1
HashSet (java.util.HashSet): 1
CompareOp (org.apache.bookkeeper.api.kv.op.CompareOp): 1
CompareResult (org.apache.bookkeeper.api.kv.op.CompareResult): 1
DeleteOp (org.apache.bookkeeper.api.kv.op.DeleteOp): 1
IncrementOp (org.apache.bookkeeper.api.kv.op.IncrementOp): 1
Op (org.apache.bookkeeper.api.kv.op.Op): 1
PutOp (org.apache.bookkeeper.api.kv.op.PutOp): 1
RangeOp (org.apache.bookkeeper.api.kv.op.RangeOp): 1