Example 1 with BlockBasedTableConfig

Use of org.rocksdb.BlockBasedTableConfig in project kafka by apache.

The class RocksDBStore, method openDB.

@SuppressWarnings("unchecked")
public void openDB(ProcessorContext context) {
    // initialize the default rocksdb options
    final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
    tableConfig.setBlockCacheSize(BLOCK_CACHE_SIZE);
    tableConfig.setBlockSize(BLOCK_SIZE);
    options = new Options();
    options.setTableFormatConfig(tableConfig);
    options.setWriteBufferSize(WRITE_BUFFER_SIZE);
    options.setCompressionType(COMPRESSION_TYPE);
    options.setCompactionStyle(COMPACTION_STYLE);
    options.setMaxWriteBufferNumber(MAX_WRITE_BUFFERS);
    options.setCreateIfMissing(true);
    options.setErrorIfExists(false);
    options.setInfoLogLevel(InfoLogLevel.ERROR_LEVEL);
    // this is the recommended way to increase parallelism in RocksDb
    // note that the current implementation increases the number of compaction threads
    // but not flush threads.
    options.setIncreaseParallelism(Runtime.getRuntime().availableProcessors());
    wOptions = new WriteOptions();
    wOptions.setDisableWAL(true);
    fOptions = new FlushOptions();
    fOptions.setWaitForFlush(true);
    final Map<String, Object> configs = context.appConfigs();
    final Class<RocksDBConfigSetter> configSetterClass = (Class<RocksDBConfigSetter>) configs.get(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG);
    if (configSetterClass != null) {
        final RocksDBConfigSetter configSetter = Utils.newInstance(configSetterClass);
        configSetter.setConfig(name, options, configs);
    }
    // we need to construct the serde while opening DB since
    // it is also triggered by windowed DB segments without initialization
    this.serdes = new StateSerdes<>(name, keySerde == null ? (Serde<K>) context.keySerde() : keySerde, valueSerde == null ? (Serde<V>) context.valueSerde() : valueSerde);
    this.dbDir = new File(new File(context.stateDir(), parentDir), this.name);
    try {
        this.db = openDB(this.dbDir, this.options, TTL_SECONDS);
    } catch (IOException e) {
        throw new StreamsException(e);
    }
}
Also used: FlushOptions(org.rocksdb.FlushOptions) WriteOptions(org.rocksdb.WriteOptions) Options(org.rocksdb.Options) StreamsException(org.apache.kafka.streams.errors.StreamsException) BlockBasedTableConfig(org.rocksdb.BlockBasedTableConfig) IOException(java.io.IOException) RocksDBConfigSetter(org.apache.kafka.streams.state.RocksDBConfigSetter) File(java.io.File)
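
The configSetterClass lookup at the end of openDB is the supported extension point for overriding these defaults. As a minimal sketch of what such a setter might look like (the class name and the sizes are illustrative, not taken from the Kafka sources):

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

public class CustomRocksDBConfig implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        // Swap in a fresh table config; the cache and block sizes below are
        // illustrative values, not tuning advice.
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCacheSize(50 * 1024 * 1024L);
        tableConfig.setBlockSize(8 * 1024L);
        options.setTableFormatConfig(tableConfig);
    }
}

A Streams application would register it under the same key openDB reads, e.g. props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class).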

Example 2 with BlockBasedTableConfig

Use of org.rocksdb.BlockBasedTableConfig in project samza by apache.

The class RocksDbOptionsHelper, method options.

public static Options options(Config storeConfig, SamzaContainerContext containerContext) {
    Options options = new Options();
    Long writeBufSize = storeConfig.getLong("container.write.buffer.size.bytes", 32 * 1024 * 1024);
    // Cache size and write buffer size are configured per container and
    // divided evenly among the container's tasks.
    int numTasks = containerContext.taskNames.size();
    options.setWriteBufferSize((int) (writeBufSize / numTasks));
    CompressionType compressionType = CompressionType.SNAPPY_COMPRESSION;
    String compressionInConfig = storeConfig.get(ROCKSDB_COMPRESSION, "snappy");
    switch (compressionInConfig) {
        case "snappy":
            compressionType = CompressionType.SNAPPY_COMPRESSION;
            break;
        case "bzip2":
            compressionType = CompressionType.BZLIB2_COMPRESSION;
            break;
        case "zlib":
            compressionType = CompressionType.ZLIB_COMPRESSION;
            break;
        case "lz4":
            compressionType = CompressionType.LZ4_COMPRESSION;
            break;
        case "lz4hc":
            compressionType = CompressionType.LZ4HC_COMPRESSION;
            break;
        case "none":
            compressionType = CompressionType.NO_COMPRESSION;
            break;
        default:
            log.warn("Unknown rocksdb.compression codec " + compressionInConfig + ", overwriting to " + compressionType.name());
    }
    options.setCompressionType(compressionType);
    Long cacheSize = storeConfig.getLong("container.cache.size.bytes", 100 * 1024 * 1024L);
    // The configured cache size is also per container; each task gets an equal share.
    Long cacheSizePerTask = cacheSize / numTasks;
    int blockSize = storeConfig.getInt(ROCKSDB_BLOCK_SIZE_BYTES, 4096);
    BlockBasedTableConfig tableOptions = new BlockBasedTableConfig();
    tableOptions.setBlockCacheSize(cacheSizePerTask).setBlockSize(blockSize);
    options.setTableFormatConfig(tableOptions);
    CompactionStyle compactionStyle = CompactionStyle.UNIVERSAL;
    String compactionStyleInConfig = storeConfig.get(ROCKSDB_COMPACTION_STYLE, "universal");
    switch (compactionStyleInConfig) {
        case "universal":
            compactionStyle = CompactionStyle.UNIVERSAL;
            break;
        case "fifo":
            compactionStyle = CompactionStyle.FIFO;
            break;
        case "level":
            compactionStyle = CompactionStyle.LEVEL;
            break;
        default:
            log.warn("Unknown rocksdb.compaction.style " + compactionStyleInConfig + ", overwriting to " + compactionStyle.name());
    }
    options.setCompactionStyle(compactionStyle);
    options.setMaxWriteBufferNumber(storeConfig.getInt(ROCKSDB_NUM_WRITE_BUFFERS, 3));
    options.setCreateIfMissing(true);
    options.setErrorIfExists(false);
    options.setMaxLogFileSize(storeConfig.getLong(ROCKSDB_MAX_LOG_FILE_SIZE_BYTES, 64 * 1024 * 1024L));
    options.setKeepLogFileNum(storeConfig.getLong(ROCKSDB_KEEP_LOG_FILE_NUM, 2));
    return options;
}
Also used: Options(org.rocksdb.Options) BlockBasedTableConfig(org.rocksdb.BlockBasedTableConfig) CompactionStyle(org.rocksdb.CompactionStyle) CompressionType(org.rocksdb.CompressionType)
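
Both switch statements are plain string-to-enum tables with a logged fallback. As an aside, the same mapping could be kept data-driven; a sketch under that assumption (CompressionCodecs is a hypothetical helper, not part of Samza):

import java.util.HashMap;
import java.util.Map;
import org.rocksdb.CompressionType;

public final class CompressionCodecs {

    private static final Map<String, CompressionType> CODECS = new HashMap<>();

    static {
        CODECS.put("snappy", CompressionType.SNAPPY_COMPRESSION);
        CODECS.put("bzip2", CompressionType.BZLIB2_COMPRESSION);
        CODECS.put("zlib", CompressionType.ZLIB_COMPRESSION);
        CODECS.put("lz4", CompressionType.LZ4_COMPRESSION);
        CODECS.put("lz4hc", CompressionType.LZ4HC_COMPRESSION);
        CODECS.put("none", CompressionType.NO_COMPRESSION);
    }

    // Unknown codec names fall back to Snappy, mirroring the default above.
    public static CompressionType fromConfig(final String name) {
        return CODECS.getOrDefault(name, CompressionType.SNAPPY_COMPRESSION);
    }
}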

Example 3 with BlockBasedTableConfig

Use of org.rocksdb.BlockBasedTableConfig in project apache-kafka-on-k8s by banzaicloud.

The class RocksDBStore, method openDB.

@SuppressWarnings("unchecked")
public void openDB(final ProcessorContext context) {
    // initialize the default rocksdb options
    final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
    tableConfig.setBlockCacheSize(BLOCK_CACHE_SIZE);
    tableConfig.setBlockSize(BLOCK_SIZE);
    options = new Options();
    options.setTableFormatConfig(tableConfig);
    options.setWriteBufferSize(WRITE_BUFFER_SIZE);
    options.setCompressionType(COMPRESSION_TYPE);
    options.setCompactionStyle(COMPACTION_STYLE);
    options.setMaxWriteBufferNumber(MAX_WRITE_BUFFERS);
    options.setCreateIfMissing(true);
    options.setErrorIfExists(false);
    options.setInfoLogLevel(InfoLogLevel.ERROR_LEVEL);
    // this is the recommended way to increase parallelism in RocksDb
    // note that the current implementation of setIncreaseParallelism affects the number
    // of compaction threads but not flush threads (the latter remains one). Also
    // the parallelism value needs to be at least two because of the code in
    // https://github.com/facebook/rocksdb/blob/62ad0a9b19f0be4cefa70b6b32876e764b7f3c11/util/options.cc#L580
    // subtracts one from the value passed to determine the number of compaction threads
    // (this could be a bug in the RocksDB code and their devs have been contacted).
    options.setIncreaseParallelism(Math.max(Runtime.getRuntime().availableProcessors(), 2));
    if (prepareForBulkload) {
        options.prepareForBulkLoad();
    }
    wOptions = new WriteOptions();
    wOptions.setDisableWAL(true);
    fOptions = new FlushOptions();
    fOptions.setWaitForFlush(true);
    final Map<String, Object> configs = context.appConfigs();
    final Class<RocksDBConfigSetter> configSetterClass = (Class<RocksDBConfigSetter>) configs.get(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG);
    if (configSetterClass != null) {
        final RocksDBConfigSetter configSetter = Utils.newInstance(configSetterClass);
        configSetter.setConfig(name, options, configs);
    }
    this.dbDir = new File(new File(context.stateDir(), parentDir), this.name);
    try {
        this.db = openDB(this.dbDir, this.options, TTL_SECONDS);
    } catch (IOException e) {
        throw new ProcessorStateException(e);
    }
    open = true;
}
Also used: FlushOptions(org.rocksdb.FlushOptions) WriteOptions(org.rocksdb.WriteOptions) Options(org.rocksdb.Options) BlockBasedTableConfig(org.rocksdb.BlockBasedTableConfig) IOException(java.io.IOException) RocksDBConfigSetter(org.apache.kafka.streams.state.RocksDBConfigSetter) File(java.io.File) ProcessorStateException(org.apache.kafka.streams.errors.ProcessorStateException)
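
Note how wOptions.setDisableWAL(true) and fOptions.setWaitForFlush(true) work as a pair: with the write-ahead log off, data is durable only once the memtable is flushed, so the store must flush explicitly and wait for it. A minimal sketch of that pairing against the plain RocksDB API (the helper class and method names are invented):

import org.rocksdb.FlushOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteOptions;

public final class WalFreeWrites {

    public static void putAndFlush(final RocksDB db, final byte[] key, final byte[] value) throws RocksDBException {
        try (WriteOptions writeOptions = new WriteOptions().setDisableWAL(true);
             FlushOptions flushOptions = new FlushOptions().setWaitForFlush(true)) {
            db.put(writeOptions, key, value);
            // With the WAL disabled the write lives only in the memtable;
            // the blocking flush below is the actual durability point.
            db.flush(flushOptions);
        }
    }
}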

Example 4 with BlockBasedTableConfig

Use of org.rocksdb.BlockBasedTableConfig in project flink by apache.

The class RocksDBResourceContainerTest, method testGetColumnFamilyOptionsWithPartitionedIndex.

@Test
public void testGetColumnFamilyOptionsWithPartitionedIndex() throws Exception {
    LRUCache cache = new LRUCache(1024L);
    WriteBufferManager wbm = new WriteBufferManager(1024L, cache);
    RocksDBSharedResources sharedResources = new RocksDBSharedResources(cache, wbm, 1024L, true);
    final ThrowingRunnable<Exception> disposer = sharedResources::close;
    OpaqueMemoryResource<RocksDBSharedResources> opaqueResource = new OpaqueMemoryResource<>(sharedResources, 1024L, disposer);
    BloomFilter blockBasedFilter = new BloomFilter();
    RocksDBOptionsFactory blockBasedBloomFilterOptionFactory = new RocksDBOptionsFactory() {

        @Override
        public DBOptions createDBOptions(DBOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
            return currentOptions;
        }

        @Override
        public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
            TableFormatConfig tableFormatConfig = currentOptions.tableFormatConfig();
            BlockBasedTableConfig blockBasedTableConfig = tableFormatConfig == null ? new BlockBasedTableConfig() : (BlockBasedTableConfig) tableFormatConfig;
            blockBasedTableConfig.setFilter(blockBasedFilter);
            handlesToClose.add(blockBasedFilter);
            currentOptions.setTableFormatConfig(blockBasedTableConfig);
            return currentOptions;
        }
    };
    try (RocksDBResourceContainer container = new RocksDBResourceContainer(PredefinedOptions.DEFAULT, blockBasedBloomFilterOptionFactory, opaqueResource)) {
        ColumnFamilyOptions columnOptions = container.getColumnOptions();
        BlockBasedTableConfig actual = (BlockBasedTableConfig) columnOptions.tableFormatConfig();
        assertThat(actual.indexType(), is(IndexType.kTwoLevelIndexSearch));
        assertThat(actual.partitionFilters(), is(true));
        assertThat(actual.pinTopLevelIndexAndFilter(), is(true));
        assertThat(actual.filterPolicy(), not(blockBasedFilter));
    }
    assertFalse("Block based filter is left unclosed.", blockBasedFilter.isOwningHandle());
}
Also used: TableFormatConfig(org.rocksdb.TableFormatConfig) BlockBasedTableConfig(org.rocksdb.BlockBasedTableConfig) IOException(java.io.IOException) BloomFilter(org.rocksdb.BloomFilter) OpaqueMemoryResource(org.apache.flink.runtime.memory.OpaqueMemoryResource) ColumnFamilyOptions(org.rocksdb.ColumnFamilyOptions) LRUCache(org.rocksdb.LRUCache) WriteBufferManager(org.rocksdb.WriteBufferManager) Collection(java.util.Collection) DBOptions(org.rocksdb.DBOptions) Test(org.junit.Test)
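
The assertions capture what the container enforces once partitioned indexes are enabled: a two-level index, partitioned filters, a pinned top-level index, and the container's own filter handle in place of the factory-supplied one. For comparison, a hedged sketch of building an equivalent table config directly against the RocksDB Java API (class and method names are invented for the example):

import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.BloomFilter;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.IndexType;
import org.rocksdb.RocksDB;

public final class PartitionedIndexConfig {

    static {
        RocksDB.loadLibrary();
    }

    public static ColumnFamilyOptions withPartitionedIndexAndFilters() {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig()
                // Two-level ("partitioned") index blocks.
                .setIndexType(IndexType.kTwoLevelIndexSearch)
                // Partition the Bloom filters the same way; this requires the
                // full-filter format, hence the second constructor argument.
                .setPartitionFilters(true)
                .setFilter(new BloomFilter(10, false))
                // Keep the top-level index/filter pinned in the block cache.
                .setPinTopLevelIndexAndFilter(true);
        return new ColumnFamilyOptions().setTableFormatConfig(tableConfig);
    }
}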

Example 5 with BlockBasedTableConfig

Use of org.rocksdb.BlockBasedTableConfig in project kafka by apache.

The class BlockBasedTableConfigWithAccessibleCacheTest, method shouldSetBlockCacheAndMakeItAccessible.

@Test
public void shouldSetBlockCacheAndMakeItAccessible() {
    final BlockBasedTableConfigWithAccessibleCache configWithAccessibleCache = new BlockBasedTableConfigWithAccessibleCache();
    final Cache blockCache = new LRUCache(1024);
    final BlockBasedTableConfig updatedConfig = configWithAccessibleCache.setBlockCache(blockCache);
    assertThat(updatedConfig, sameInstance(configWithAccessibleCache));
    assertThat(configWithAccessibleCache.blockCache(), sameInstance(blockCache));
}
Also used: LRUCache(org.rocksdb.LRUCache) BlockBasedTableConfig(org.rocksdb.BlockBasedTableConfig) Cache(org.rocksdb.Cache) Test(org.junit.Test)
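
The getter in the last assertion is the whole point of the wrapper: the stock BlockBasedTableConfig accepts a block cache but never exposes it again, so code that shares one cache across stores has to hold its own reference in order to close it. A sketch of that plain-RocksDB pattern (names and sizes are illustrative):

import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;

public final class SharedBlockCache {

    public static void main(final String[] args) {
        RocksDB.loadLibrary();
        // Keep the cache handle ourselves; setBlockCache has no matching
        // getter on the stock config, which is what the wrapper adds.
        try (Cache sharedCache = new LRUCache(16 * 1024 * 1024L)) {
            final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
            tableConfig.setBlockCache(sharedCache);
            try (Options options = new Options().setTableFormatConfig(tableConfig)) {
                // ... open one or more RocksDB instances with these options.
            }
        }
    }
}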

Aggregations

BlockBasedTableConfig (org.rocksdb.BlockBasedTableConfig): 13
Options (org.rocksdb.Options): 5
TableFormatConfig (org.rocksdb.TableFormatConfig): 5
Cache (org.rocksdb.Cache): 4
LRUCache (org.rocksdb.LRUCache): 4
WriteOptions (org.rocksdb.WriteOptions): 4
IOException (java.io.IOException): 3
BloomFilter (org.rocksdb.BloomFilter): 3
ColumnFamilyOptions (org.rocksdb.ColumnFamilyOptions): 3
FlushOptions (org.rocksdb.FlushOptions): 3
File (java.io.File): 2
ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException): 2
RocksDBConfigSetter (org.apache.kafka.streams.state.RocksDBConfigSetter): 2
Test (org.junit.Test): 2
CompressionType (org.rocksdb.CompressionType): 2
DBOptions (org.rocksdb.DBOptions): 2
PlainTableConfig (org.rocksdb.PlainTableConfig): 2
Field (java.lang.reflect.Field): 1
Collection (java.util.Collection): 1
List (java.util.List): 1