
Example 1 with RocksIterator

Use of org.rocksdb.RocksIterator in the Apache Kafka project.

From the class RocksDBStore, method all():

@Override
public synchronized KeyValueIterator<K, V> all() {
    validateStoreOpen();
    // query rocksdb
    RocksIterator innerIter = db.newIterator();
    innerIter.seekToFirst();
    final RocksDbIterator rocksDbIterator = new RocksDbIterator(name, innerIter, serdes);
    openIterators.add(rocksDbIterator);
    return rocksDbIterator;
}
Also used : RocksIterator(org.rocksdb.RocksIterator)
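
The RocksDbIterator class constructed above is not shown in this snippet. As a rough sketch (an assumption, not Kafka's actual implementation), a wrapper of this kind adapts the cursor-style RocksIterator API (isValid/key/value/next) to a java.util.Iterator over raw key/value bytes; the real class additionally deserializes entries through the store's serdes and presumably removes itself from openIterators when it is closed.

import java.util.AbstractMap;
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;

import org.rocksdb.RocksIterator;

// Illustrative only: a minimal adapter from RocksIterator to java.util.Iterator.
class RocksBytesIterator implements Iterator<Map.Entry<byte[], byte[]>>, AutoCloseable {

    private final RocksIterator iter;

    RocksBytesIterator(final RocksIterator iter) {
        // the iterator is assumed to be positioned already, e.g. via seekToFirst()
        this.iter = iter;
    }

    @Override
    public boolean hasNext() {
        return iter.isValid();
    }

    @Override
    public Map.Entry<byte[], byte[]> next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        final Map.Entry<byte[], byte[]> entry =
            new AbstractMap.SimpleEntry<>(iter.key(), iter.value());
        iter.next();
        return entry;
    }

    @Override
    public void close() {
        iter.close(); // dispose() in very old RocksJava versions
    }
}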

Example 2 with RocksIterator

Use of org.rocksdb.RocksIterator in the Apache Flink project.

From the class RocksDBStateBackendTest, method setupRocksKeyedStateBackend():

public void setupRocksKeyedStateBackend() throws Exception {
    blocker = new OneShotLatch();
    waiter = new OneShotLatch();
    testStreamFactory = new BlockerCheckpointStreamFactory(1024 * 1024);
    testStreamFactory.setBlockerLatch(blocker);
    testStreamFactory.setWaiterLatch(waiter);
    testStreamFactory.setAfterNumberInvocations(100);
    RocksDBStateBackend backend = getStateBackend();
    Environment env = new DummyEnvironment("TestTask", 1, 0);
    keyedStateBackend = (RocksDBKeyedStateBackend<Integer>) backend.createKeyedStateBackend(env, new JobID(), "Test", IntSerializer.INSTANCE, 2, new KeyGroupRange(0, 1), mock(TaskKvStateRegistry.class));
    testState1 = keyedStateBackend.getPartitionedState(VoidNamespace.INSTANCE, VoidNamespaceSerializer.INSTANCE, new ValueStateDescriptor<>("TestState-1", Integer.class, 0));
    testState2 = keyedStateBackend.getPartitionedState(VoidNamespace.INSTANCE, VoidNamespaceSerializer.INSTANCE, new ValueStateDescriptor<>("TestState-2", String.class, ""));
    allCreatedCloseables = new ArrayList<>();
    keyedStateBackend.db = spy(keyedStateBackend.db);
    doAnswer(new Answer<Object>() {

        @Override
        public Object answer(InvocationOnMock invocationOnMock) throws Throwable {
            RocksIterator rocksIterator = spy((RocksIterator) invocationOnMock.callRealMethod());
            allCreatedCloseables.add(rocksIterator);
            return rocksIterator;
        }
    }).when(keyedStateBackend.db).newIterator(any(ColumnFamilyHandle.class), any(ReadOptions.class));
    doAnswer(new Answer<Object>() {

        @Override
        public Object answer(InvocationOnMock invocationOnMock) throws Throwable {
            Snapshot snapshot = spy((Snapshot) invocationOnMock.callRealMethod());
            allCreatedCloseables.add(snapshot);
            return snapshot;
        }
    }).when(keyedStateBackend.db).getSnapshot();
    doAnswer(new Answer<Object>() {

        @Override
        public Object answer(InvocationOnMock invocationOnMock) throws Throwable {
            ColumnFamilyHandle snapshot = spy((ColumnFamilyHandle) invocationOnMock.callRealMethod());
            allCreatedCloseables.add(snapshot);
            return snapshot;
        }
    }).when(keyedStateBackend.db).createColumnFamily(any(ColumnFamilyDescriptor.class));
    for (int i = 0; i < 100; ++i) {
        keyedStateBackend.setCurrentKey(i);
        testState1.update(4200 + i);
        testState2.update("S-" + (4200 + i));
    }
}
Also used : KeyGroupRange(org.apache.flink.runtime.state.KeyGroupRange) TaskKvStateRegistry(org.apache.flink.runtime.query.TaskKvStateRegistry) DummyEnvironment(org.apache.flink.runtime.operators.testutils.DummyEnvironment) RocksIterator(org.rocksdb.RocksIterator) ColumnFamilyDescriptor(org.rocksdb.ColumnFamilyDescriptor) ColumnFamilyHandle(org.rocksdb.ColumnFamilyHandle) ValueStateDescriptor(org.apache.flink.api.common.state.ValueStateDescriptor) Snapshot(org.rocksdb.Snapshot) ReadOptions(org.rocksdb.ReadOptions) InvocationOnMock(org.mockito.invocation.InvocationOnMock) OneShotLatch(org.apache.flink.core.testutils.OneShotLatch) BlockerCheckpointStreamFactory(org.apache.flink.runtime.util.BlockerCheckpointStreamFactory) Environment(org.apache.flink.runtime.execution.Environment) RocksObject(org.rocksdb.RocksObject) JobID(org.apache.flink.api.common.JobID)
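
Spying on keyedStateBackend.db lets the test record every RocksIterator, Snapshot, and ColumnFamilyHandle the backend obtains from RocksDB. A hedged sketch of the matching verification step (illustrative, not the exact Flink test code) would walk the recorded handles after the backend is disposed and assert that each native resource was released exactly once:

// Illustrative only: uses org.mockito.Mockito.verify and times on the spied
// handles collected in allCreatedCloseables above. Depending on the RocksJava
// version, the release method to verify is close() or dispose().
private void verifyRocksObjectsReleased() {
    for (RocksObject rocksCloseable : allCreatedCloseables) {
        verify(rocksCloseable, times(1)).close();
    }
}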

Example 3 with RocksIterator

Use of org.rocksdb.RocksIterator in the Apache Flink project.

From the class ListViaRangeSpeedMiniBenchmark, method main():

public static void main(String[] args) throws Exception {
    final File rocksDir = new File("/tmp/rdb");
    FileUtils.deleteDirectory(rocksDir);
    final Options options = new Options()
        .setCompactionStyle(CompactionStyle.LEVEL)
        .setLevelCompactionDynamicLevelBytes(true)
        .setIncreaseParallelism(4)
        .setUseFsync(false)
        .setMaxOpenFiles(-1)
        .setDisableDataSync(true)
        .setCreateIfMissing(true)
        .setMergeOperator(new StringAppendOperator());
    final WriteOptions write_options = new WriteOptions().setSync(false).setDisableWAL(true);
    final RocksDB rocksDB = RocksDB.open(options, rocksDir.getAbsolutePath());
    final String key = "key";
    final String value = "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ7890654321";
    final byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
    final byte[] valueBytes = value.getBytes(StandardCharsets.UTF_8);
    final byte[] keyTemplate = Arrays.copyOf(keyBytes, keyBytes.length + 4);
    final Unsafe unsafe = MemoryUtils.UNSAFE;
    final long offset = unsafe.arrayBaseOffset(byte[].class) + keyTemplate.length - 4;
    final int num = 50000;
    System.out.println("begin insert");
    final long beginInsert = System.nanoTime();
    for (int i = 0; i < num; i++) {
        unsafe.putInt(keyTemplate, offset, i);
        rocksDB.put(write_options, keyTemplate, valueBytes);
    }
    final long endInsert = System.nanoTime();
    System.out.println("end insert - duration: " + ((endInsert - beginInsert) / 1_000_000) + " ms");
    final byte[] resultHolder = new byte[num * valueBytes.length];
    final long beginGet = System.nanoTime();
    final RocksIterator iterator = rocksDB.newIterator();
    int pos = 0;
    // seek to start
    unsafe.putInt(keyTemplate, offset, 0);
    iterator.seek(keyTemplate);
    // mark end
    unsafe.putInt(keyTemplate, offset, -1);
    // iterate
    while (iterator.isValid()) {
        byte[] currKey = iterator.key();
        if (samePrefix(keyBytes, currKey)) {
            byte[] currValue = iterator.value();
            System.arraycopy(currValue, 0, resultHolder, pos, currValue.length);
            pos += currValue.length;
            iterator.next();
        } else {
            break;
        }
    }
    final long endGet = System.nanoTime();
    System.out.println("end get - duration: " + ((endGet - beginGet) / 1_000_000) + " ms");
}
Also used : Options(org.rocksdb.Options) WriteOptions(org.rocksdb.WriteOptions) RocksDB(org.rocksdb.RocksDB) StringAppendOperator(org.rocksdb.StringAppendOperator) Unsafe(sun.misc.Unsafe) RocksIterator(org.rocksdb.RocksIterator) File(java.io.File)
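
The samePrefix helper used in the scan loop is not included in the snippet above. A plausible minimal version (an assumption about its behavior, not necessarily the benchmark's exact code) simply compares the current key byte-by-byte against the fixed prefix:

private static boolean samePrefix(byte[] prefix, byte[] key) {
    // Returns true if key starts with all bytes of prefix.
    if (key.length < prefix.length) {
        return false;
    }
    for (int i = 0; i < prefix.length; i++) {
        if (prefix[i] != key[i]) {
            return false;
        }
    }
    return true;
}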

Aggregations

RocksIterator (org.rocksdb.RocksIterator): 3
File (java.io.File): 1
JobID (org.apache.flink.api.common.JobID): 1
ValueStateDescriptor (org.apache.flink.api.common.state.ValueStateDescriptor): 1
OneShotLatch (org.apache.flink.core.testutils.OneShotLatch): 1
Environment (org.apache.flink.runtime.execution.Environment): 1
DummyEnvironment (org.apache.flink.runtime.operators.testutils.DummyEnvironment): 1
TaskKvStateRegistry (org.apache.flink.runtime.query.TaskKvStateRegistry): 1
KeyGroupRange (org.apache.flink.runtime.state.KeyGroupRange): 1
BlockerCheckpointStreamFactory (org.apache.flink.runtime.util.BlockerCheckpointStreamFactory): 1
InvocationOnMock (org.mockito.invocation.InvocationOnMock): 1
ColumnFamilyDescriptor (org.rocksdb.ColumnFamilyDescriptor): 1
ColumnFamilyHandle (org.rocksdb.ColumnFamilyHandle): 1
Options (org.rocksdb.Options): 1
ReadOptions (org.rocksdb.ReadOptions): 1
RocksDB (org.rocksdb.RocksDB): 1
RocksObject (org.rocksdb.RocksObject): 1
Snapshot (org.rocksdb.Snapshot): 1
StringAppendOperator (org.rocksdb.StringAppendOperator): 1
WriteOptions (org.rocksdb.WriteOptions): 1
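
For reference, a self-contained sketch of the basic RocksIterator lifecycle that all three examples build on: open a database, write a few entries, scan them in key order, and release every native handle. This assumes a recent RocksJava version in which Options, RocksDB, and RocksIterator implement AutoCloseable; the class name and database path are illustrative.

import java.nio.charset.StandardCharsets;

import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.RocksIterator;

public class RocksIteratorLifecycle {

    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/rocks-iterator-demo")) {

            // write a few entries
            for (int i = 0; i < 3; i++) {
                db.put(("key-" + i).getBytes(StandardCharsets.UTF_8),
                       ("value-" + i).getBytes(StandardCharsets.UTF_8));
            }

            // iterators pin resources in the native layer, so they must be closed too
            try (RocksIterator iterator = db.newIterator()) {
                for (iterator.seekToFirst(); iterator.isValid(); iterator.next()) {
                    System.out.println(new String(iterator.key(), StandardCharsets.UTF_8)
                        + " = " + new String(iterator.value(), StandardCharsets.UTF_8));
                }
            }
        }
    }
}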