Example 1 with RocksDBException

Use of org.rocksdb.RocksDBException in project flink by apache.

Class RocksDBKeyedStateBackend, method getColumnFamily.

// ------------------------------------------------------------------------
//  State factories
// ------------------------------------------------------------------------
/**
 * Creates a column family handle for use with a k/v state. When restoring from a snapshot
 * we don't restore the individual k/v states, just the global RocksDB database and the
 * list of column families. When a k/v state is first requested, we check whether we
 * already have a column family for it and return it, or create a new one if it doesn't exist.
 *
 * <p>This also checks whether the {@link StateDescriptor} for a state matches the one
 * that we checkpointed, i.e. is already in the map of column families.
 */
@SuppressWarnings({"rawtypes", "unchecked"})
protected <N, S> ColumnFamilyHandle getColumnFamily(StateDescriptor<?, S> descriptor, TypeSerializer<N> namespaceSerializer) throws IOException {
    Tuple2<ColumnFamilyHandle, RegisteredBackendStateMetaInfo<?, ?>> stateInfo = kvStateInformation.get(descriptor.getName());
    RegisteredBackendStateMetaInfo<N, S> newMetaInfo = new RegisteredBackendStateMetaInfo<>(descriptor.getType(), descriptor.getName(), namespaceSerializer, descriptor.getSerializer());
    if (stateInfo != null) {
        if (newMetaInfo.isCompatibleWith(stateInfo.f1)) {
            stateInfo.f1 = newMetaInfo;
            return stateInfo.f0;
        } else {
            throw new IOException("Trying to access state using wrong meta info, was " + stateInfo.f1 + " trying access with " + newMetaInfo);
        }
    }
    ColumnFamilyDescriptor columnDescriptor = new ColumnFamilyDescriptor(descriptor.getName().getBytes(ConfigConstants.DEFAULT_CHARSET), columnOptions);
    try {
        ColumnFamilyHandle columnFamily = db.createColumnFamily(columnDescriptor);
        Tuple2<ColumnFamilyHandle, RegisteredBackendStateMetaInfo<N, S>> tuple = new Tuple2<>(columnFamily, newMetaInfo);
        // Deliberately use a raw Map so the concretely typed tuple can be stored
        // in the wildcard-typed kvStateInformation map.
        Map rawAccess = kvStateInformation;
        rawAccess.put(descriptor.getName(), tuple);
        return columnFamily;
    } catch (RocksDBException e) {
        throw new IOException("Error creating ColumnFamilyHandle.", e);
    }
}
Also used: RocksDBException (org.rocksdb.RocksDBException), Tuple2 (org.apache.flink.api.java.tuple.Tuple2), RegisteredBackendStateMetaInfo (org.apache.flink.runtime.state.RegisteredBackendStateMetaInfo), IOException (java.io.IOException), ColumnFamilyDescriptor (org.rocksdb.ColumnFamilyDescriptor), Map (java.util.Map), HashMap (java.util.HashMap), ColumnFamilyHandle (org.rocksdb.ColumnFamilyHandle)
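The same boundary pattern recurs throughout these examples: RocksDB's Java API throws the checked RocksDBException, and each project translates it into its own exception type at the edge of its RocksDB code. A minimal, self-contained sketch of creating a column family and wrapping the exception, assuming a recent rocksdbjni where Options, RocksDB and ColumnFamilyHandle are AutoCloseable (the database path and column family name are illustrative, not from the Flink code):

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.rocksdb.ColumnFamilyDescriptor;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class CreateColumnFamilyExample {

    public static void main(String[] args) throws IOException {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/rocksdb-example")) {
            // Column family names are raw bytes; UTF-8 matches what the Flink
            // backend uses via ConfigConstants.DEFAULT_CHARSET.
            ColumnFamilyDescriptor descriptor = new ColumnFamilyDescriptor(
                    "my-state".getBytes(StandardCharsets.UTF_8));
            try (ColumnFamilyHandle handle = db.createColumnFamily(descriptor)) {
                // use the handle for puts/gets against this column family
            }
        } catch (RocksDBException e) {
            // Translate the checked RocksDB exception at the boundary,
            // mirroring the Flink example above.
            throw new IOException("Error creating ColumnFamilyHandle.", e);
        }
    }
}

Note that the handle, like the database itself, owns native resources, which is why both are managed with try-with-resources here.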

Example 2 with RocksDBException

Use of org.rocksdb.RocksDBException in project flink by apache.

Class RocksDBListState, method get.

@Override
public Iterable<V> get() {
    try {
        writeCurrentKeyWithGroupAndNamespace();
        byte[] key = keySerializationStream.toByteArray();
        byte[] valueBytes = backend.db.get(columnFamily, key);
        if (valueBytes == null) {
            return null;
        }
        ByteArrayInputStream bais = new ByteArrayInputStream(valueBytes);
        DataInputViewStreamWrapper in = new DataInputViewStreamWrapper(bais);
        List<V> result = new ArrayList<>();
        // Values are stored back-to-back with a one-byte separator between them,
        // so skip that separator whenever more data follows.
        while (in.available() > 0) {
            result.add(valueSerializer.deserialize(in));
            if (in.available() > 0) {
                in.readByte();
            }
        }
        return result;
    } catch (IOException | RocksDBException e) {
        throw new RuntimeException("Error while retrieving data from RocksDB", e);
    }
}
Also used: RocksDBException (org.rocksdb.RocksDBException), ByteArrayInputStream (java.io.ByteArrayInputStream), ArrayList (java.util.ArrayList), IOException (java.io.IOException), DataInputViewStreamWrapper (org.apache.flink.core.memory.DataInputViewStreamWrapper)
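The in.readByte() call above skips a one-byte separator: this version of the backend appends list entries through RocksDB's stringappend merge operator, which joins merged values with a delimiter byte. A sketch of that layout using plain java.io streams and long values in place of Flink's TypeSerializer (the ',' delimiter value is the operator's default and an assumption here):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DelimitedListCodec {

    private static final byte DELIMITER = ',';

    // Mirrors how merged list entries are laid out: value, delimiter, value, ...
    static byte[] encode(List<Long> values) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(baos);
        for (int i = 0; i < values.size(); i++) {
            if (i > 0) {
                out.writeByte(DELIMITER);
            }
            out.writeLong(values.get(i));
        }
        return baos.toByteArray();
    }

    // Same loop shape as RocksDBListState#get: deserialize one value, then
    // skip a single delimiter byte if more data follows.
    static List<Long> decode(byte[] bytes) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
        List<Long> result = new ArrayList<>();
        while (in.available() > 0) {
            result.add(in.readLong());
            if (in.available() > 0) {
                in.readByte();
            }
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(decode(encode(List.of(1L, 2L, 3L))));  // prints [1, 2, 3]
    }
}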

Example 3 with RocksDBException

Use of org.rocksdb.RocksDBException in project pravega by pravega.

Class RocksDBCache, method get.

@Override
public byte[] get(Key key) {
    ensureInitializedAndNotClosed();
    Timer timer = new Timer();
    byte[] result;
    try {
        result = this.database.get().get(key.serialize());
    } catch (RocksDBException ex) {
        // convert() is a RocksDBCache helper that wraps the checked RocksDBException
        // in an unchecked exception naming the failed operation and key.
        throw convert(ex, "get key '%s'", key);
    }
    RocksDBMetrics.get(timer.getElapsedMillis());
    return result;
}
Also used: RocksDBException (org.rocksdb.RocksDBException), Timer (io.pravega.common.Timer)
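The convert helper itself is not shown in the snippet. A plausible sketch of such a translation helper, with a hypothetical exception type and message format rather than Pravega's actual implementation:

import org.rocksdb.RocksDBException;

final class CacheExceptionUtil {

    private CacheExceptionUtil() {
    }

    // Hypothetical equivalent of the convert() helper above: turn the checked
    // RocksDBException into an unchecked exception whose message describes the
    // attempted operation and key.
    static RuntimeException convert(RocksDBException ex, String messageFormat, Object... args) {
        return new RuntimeException(
                "Unable to " + String.format(messageFormat, args) + ".", ex);
    }
}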

Example 4 with RocksDBException

Use of org.rocksdb.RocksDBException in project pravega by pravega.

Class RocksDBCache, method insert.

// endregion
// region Cache Implementation
@Override
public void insert(Key key, byte[] data) {
    ensureInitializedAndNotClosed();
    Timer timer = new Timer();
    try {
        this.database.get().put(this.writeOptions, key.serialize(), data);
    } catch (RocksDBException ex) {
        throw convert(ex, "insert key '%s'", key);
    }
    RocksDBMetrics.insert(timer.getElapsedMillis());
}
Also used: RocksDBException (org.rocksdb.RocksDBException), Timer (io.pravega.common.Timer)
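The put goes through a WriteOptions instance built once up front (this.writeOptions above). For a cache whose contents can be rebuilt after a crash, it is common to disable the write-ahead log so puts are cheaper; the snippet does not show Pravega's actual settings, so the configuration below is an assumption:

import org.rocksdb.WriteOptions;

final class CacheWriteOptions {

    // A cache can usually tolerate losing un-flushed writes on a crash, so
    // skipping the WAL trades durability for faster puts. (Assumed
    // configuration; the snippet above does not show the real settings.)
    static WriteOptions forCacheWrites() {
        return new WriteOptions()
                .setDisableWAL(true)
                .setSync(false);
    }
}

WriteOptions is itself a native object, so building it once and reusing it across calls, as the insert method does, also avoids a per-write allocation.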

Example 5 with RocksDBException

Use of org.rocksdb.RocksDBException in project apache-kafka-on-k8s by banzaicloud.

Class RocksDBStore, method toggleDbForBulkLoading.

private void toggleDbForBulkLoading(final boolean prepareForBulkload) {
    if (prepareForBulkload) {
        // if the store is not empty, we need to compact to get around the num.levels check
        // for bulk loading
        final String[] sstFileNames = dbDir.list(new FilenameFilter() {

            @Override
            public boolean accept(final File dir, final String name) {
                return name.matches(".*\\.sst");
            }
        });
        if (sstFileNames != null && sstFileNames.length > 0) {
            try {
                // compactRange(changeLevel, targetLevel, targetPathId): push all data
                // down to level 1 so the bulk-load num.levels check passes.
                this.db.compactRange(true, 1, 0);
            } catch (final RocksDBException e) {
                throw new ProcessorStateException("Error while range compacting during restoring store " + this.name, e);
            }
            // we need to re-open with the old num.levels again, this is a workaround
            // until https://github.com/facebook/rocksdb/pull/2740 is merged in rocksdb
            close();
            openDB(internalProcessorContext);
        }
    }
    close();
    this.prepareForBulkload = prepareForBulkload;
    openDB(internalProcessorContext);
}
Also used: FilenameFilter (java.io.FilenameFilter), RocksDBException (org.rocksdb.RocksDBException), File (java.io.File), ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException)
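The bulk-load mode being toggled here pairs with RocksDB's Options.prepareForBulkLoad(), which disables auto-compaction and tunes write buffers for one-shot ingestion; the level constraint mentioned in the comment is why a compaction has to run first when the store is non-empty. A minimal sketch of the open-ingest-compact cycle (the path and key layout are illustrative):

import java.nio.ByteBuffer;

import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class BulkLoadExample {

    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        // prepareForBulkLoad() disables auto-compaction and raises write-buffer
        // limits so one-shot ingestion is not throttled by background work.
        try (Options options = new Options()
                .setCreateIfMissing(true)
                .prepareForBulkLoad();
             RocksDB db = RocksDB.open(options, "/tmp/rocksdb-bulk-load")) {
            for (long i = 0; i < 100_000; i++) {
                db.put(longToBytes(i), longToBytes(i * 2));
            }
            // After ingestion, compact the whole key range so the data is laid
            // out in sorted, non-overlapping levels for normal reads.
            db.compactRange();
        }
    }

    private static byte[] longToBytes(long v) {
        return ByteBuffer.allocate(Long.BYTES).putLong(v).array();
    }
}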

Aggregations

RocksDBException (org.rocksdb.RocksDBException): 66
IOException (java.io.IOException): 17
ColumnFamilyHandle (org.rocksdb.ColumnFamilyHandle): 17
ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException): 11
ColumnFamilyDescriptor (org.rocksdb.ColumnFamilyDescriptor): 11
ArrayList (java.util.ArrayList): 10
WriteBatch (org.rocksdb.WriteBatch): 10
HashMap (java.util.HashMap): 8
Map (java.util.Map): 8
MetricException (org.apache.storm.metricstore.MetricException): 8
WriteOptions (org.rocksdb.WriteOptions): 7
Options (org.rocksdb.Options): 6
File (java.io.File): 5
DBOptions (org.rocksdb.DBOptions): 5
FlushOptions (org.rocksdb.FlushOptions): 5
RocksDB (org.rocksdb.RocksDB): 5
DataInputViewStreamWrapper (org.apache.flink.core.memory.DataInputViewStreamWrapper): 4
ColumnFamilyOptions (org.rocksdb.ColumnFamilyOptions): 4
ReadOptions (org.rocksdb.ReadOptions): 4
RocksIterator (org.rocksdb.RocksIterator): 4