Example 41 with RocksDBException

Use of org.rocksdb.RocksDBException in project bookkeeper by apache.

From the class RocksdbKVStore, method readLastRevision:

private void readLastRevision() throws StateStoreException {
    byte[] revisionBytes;
    try {
        revisionBytes = db.get(metaCfHandle, LAST_REVISION);
    } catch (RocksDBException e) {
        throw new StateStoreException("Failed to read last revision from state store " + name(), e);
    }
    if (null == revisionBytes) {
        return;
    }
    long revision = Bytes.toLong(revisionBytes, 0);
    lastRevisionUpdater.set(this, revision);
}
Also used: StateStoreException (org.apache.bookkeeper.statelib.api.exceptions.StateStoreException), InvalidStateStoreException (org.apache.bookkeeper.statelib.api.exceptions.InvalidStateStoreException), RocksDBException (org.rocksdb.RocksDBException)
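
The pattern above, reading a fixed metadata key and decoding it as a long, generalizes beyond bookkeeper. A minimal, self-contained sketch of the same pattern; the helper name readCounter, the defaultValue parameter, and the use of java.nio.ByteBuffer in place of bookkeeper's Bytes.toLong are illustrative assumptions, not part of the original project.

import java.nio.ByteBuffer;

import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Hypothetical helper: read an 8-byte value stored under a fixed key and
// decode it as a big-endian long, returning a default when the key is absent.
static long readCounter(RocksDB db, byte[] key, long defaultValue) {
    try {
        // RocksDB.get returns null when the key does not exist
        byte[] bytes = db.get(key);
        if (bytes == null) {
            return defaultValue;
        }
        // ByteBuffer stands in for bookkeeper's Bytes.toLong(bytes, 0)
        return ByteBuffer.wrap(bytes).getLong();
    } catch (RocksDBException e) {
        throw new IllegalStateException("Failed to read key from RocksDB", e);
    }
}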

Example 42 with RocksDBException

Use of org.rocksdb.RocksDBException in project bookkeeper by apache.

From the class RocksdbCheckpointTask, method checkpoint:

public String checkpoint(byte[] txid) throws StateStoreException {
    String checkpointId = UUID.randomUUID().toString();
    File tempDir = new File(checkpointDir, checkpointId);
    log.info("Create a local checkpoint of state store {} at {}", dbName, tempDir);
    try {
        try {
            checkpoint.createCheckpoint(tempDir.getAbsolutePath());
        } catch (RocksDBException e) {
            throw new StateStoreException("Failed to create a checkpoint at " + tempDir, e);
        }
        String remoteCheckpointPath = RocksUtils.getDestCheckpointPath(dbPrefix, checkpointId);
        if (!checkpointStore.fileExists(remoteCheckpointPath)) {
            checkpointStore.createDirectories(remoteCheckpointPath);
        }
        String sstsPath = RocksUtils.getDestSstsPath(dbPrefix);
        if (!checkpointStore.fileExists(sstsPath)) {
            checkpointStore.createDirectories(sstsPath);
        }
        // get the files to copy
        List<File> filesToCopy = getFilesToCopy(tempDir);
        // copy the files
        copyFilesToDest(checkpointId, filesToCopy);
        // finalize copy files
        finalizeCopyFiles(checkpointId, filesToCopy);
        // dump the file list to checkpoint file
        finalizeCheckpoint(checkpointId, tempDir, txid);
        // clean up the remote checkpoints
        if (removeRemoteCheckpointsAfterSuccessfulCheckpoint) {
            cleanupRemoteCheckpoints(tempDir, checkpointId);
        }
        return checkpointId;
    } catch (IOException ioe) {
        log.error("Failed to checkpoint db {} to dir {}", dbName, tempDir, ioe);
        throw new StateStoreException("Failed to checkpoint db " + dbName + " to dir " + tempDir, ioe);
    } finally {
        if (removeLocalCheckpointAfterSuccessfulCheckpoint && tempDir.exists()) {
            try {
                MoreFiles.deleteRecursively(Paths.get(tempDir.getAbsolutePath()), RecursiveDeleteOption.ALLOW_INSECURE);
            } catch (IOException ioe) {
                log.warn("Failed to remove temporary checkpoint dir {}", tempDir, ioe);
            }
        }
    }
}
Also used: StateStoreException (org.apache.bookkeeper.statelib.api.exceptions.StateStoreException), RocksDBException (org.rocksdb.RocksDBException), IOException (java.io.IOException), File (java.io.File)
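
The checkpoint field used above comes from RocksDB's Checkpoint API. A minimal sketch of obtaining a checkpoint handle and creating a local checkpoint, assuming a recent RocksDB Java client; the helper name localCheckpoint is an illustrative assumption.

import org.rocksdb.Checkpoint;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Hypothetical helper: snapshot a live database into a new directory.
// RocksDB requires that the target directory does not exist yet; on the
// same filesystem the checkpoint is built largely from hard links.
static void localCheckpoint(RocksDB db, String checkpointPath) throws RocksDBException {
    try (Checkpoint checkpoint = Checkpoint.create(db)) {
        checkpoint.createCheckpoint(checkpointPath);
    }
}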

Example 43 with RocksDBException

Use of org.rocksdb.RocksDBException in project aion by aionnetwork.

From the class RocksDBWrapper, method putBatchInternal:

@Override
public void putBatchInternal(Map<byte[], byte[]> input) {
    // try-with-resources will automatically close the batch object
    try (WriteBatch batch = new WriteBatch()) {
        // add put operations to the batch
        for (Map.Entry<byte[], byte[]> e : input.entrySet()) {
            byte[] key = e.getKey();
            byte[] value = e.getValue();
            batch.put(key, value);
        }
        // bulk atomic update
        db.write(writeOptions, batch);
    } catch (RocksDBException e) {
        LOG.error("Unable to execute batch put/update operation on " + this.toString() + ".", e);
    }
}
Also used: RocksDBException (org.rocksdb.RocksDBException), WriteBatch (org.rocksdb.WriteBatch), Map (java.util.Map)
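
WriteBatch also supports deletes, so mixed mutations can be committed atomically in the same way. A minimal sketch, assuming a recent RocksDB Java client where WriteBatch.delete is available; the helper name atomicUpdate is an illustrative assumption.

import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatch;
import org.rocksdb.WriteOptions;

// Hypothetical helper: combine a put and a delete into one atomic write.
static void atomicUpdate(RocksDB db, byte[] putKey, byte[] value, byte[] deleteKey)
        throws RocksDBException {
    try (WriteBatch batch = new WriteBatch();
         WriteOptions options = new WriteOptions()) {
        batch.put(putKey, value);
        batch.delete(deleteKey);
        // either every operation in the batch is applied, or none is
        db.write(options, batch);
    }
}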

Example 44 with RocksDBException

Use of org.rocksdb.RocksDBException in project apache-kafka-on-k8s by banzaicloud.

From the class RocksDBStore, method approximateNumEntries:

/**
 * Return an approximate count of key-value mappings in this store.
 *
 * <code>RocksDB</code> cannot return an exact entry count without doing a
 * full scan, so this method relies on the <code>rocksdb.estimate-num-keys</code>
 * property to get an approximate count. The returned size also includes
 * a count of dirty keys in the store's in-memory cache, which may lead to some
 * double-counting of entries and inflate the estimate.
 *
 * @return an approximate count of key-value mappings in the store.
 */
@Override
public long approximateNumEntries() {
    validateStoreOpen();
    final long value;
    try {
        value = this.db.getLongProperty("rocksdb.estimate-num-keys");
    } catch (final RocksDBException e) {
        throw new ProcessorStateException("Error fetching property from store " + this.name, e);
    }
    if (isOverflowing(value)) {
        return Long.MAX_VALUE;
    }
    return value;
}
Also used: RocksDBException (org.rocksdb.RocksDBException), ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException)
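
The same property lookup works on any open database handle, independent of Kafka Streams. A minimal sketch, with the helper name estimateNumKeys as an illustrative assumption.

import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Hypothetical helper: query RocksDB's built-in estimate of the key count.
// The property is derived from table metadata rather than a scan, so it can
// over- or under-count when there are many updates, deletes, or
// not-yet-compacted duplicates.
static long estimateNumKeys(RocksDB db) {
    try {
        return db.getLongProperty("rocksdb.estimate-num-keys");
    } catch (RocksDBException e) {
        throw new IllegalStateException("Failed to read rocksdb.estimate-num-keys", e);
    }
}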

Example 45 with RocksDBException

Use of org.rocksdb.RocksDBException in project storm by apache.

From the class RocksDbStore, method prepare:

/**
 * Create metric store instance using the configurations provided via the config map.
 *
 * @param config Storm config map
 * @param metricsRegistry The Nimbus daemon metrics registry
 * @throws MetricException on preparation error
 */
@Override
public void prepare(Map<String, Object> config, StormMetricsRegistry metricsRegistry) throws MetricException {
    validateConfig(config);
    this.failureMeter = metricsRegistry.registerMeter("RocksDB:metric-failures");
    RocksDB.loadLibrary();
    boolean createIfMissing = ObjectReader.getBoolean(config.get(DaemonConfig.STORM_ROCKSDB_CREATE_IF_MISSING), false);
    try (Options options = new Options().setCreateIfMissing(createIfMissing)) {
        // use the hash index for prefix searches
        BlockBasedTableConfig tfc = new BlockBasedTableConfig();
        tfc.setIndexType(IndexType.kHashSearch);
        options.setTableFormatConfig(tfc);
        options.useCappedPrefixExtractor(RocksDbKey.KEY_SIZE);
        String path = getRocksDbAbsoluteDir(config);
        LOG.info("Opening RocksDB from {}", path);
        db = RocksDB.open(options, path);
    } catch (RocksDBException e) {
        String message = "Error opening RocksDB database";
        LOG.error(message, e);
        throw new MetricException(message, e);
    }
    // create thread to delete old metrics and metadata
    int retentionHours = Integer.parseInt(config.get(DaemonConfig.STORM_ROCKSDB_METRIC_RETENTION_HOURS).toString());
    int deletionPeriod = 0;
    if (config.containsKey(DaemonConfig.STORM_ROCKSDB_METRIC_DELETION_PERIOD_HOURS)) {
        deletionPeriod = Integer.parseInt(config.get(DaemonConfig.STORM_ROCKSDB_METRIC_DELETION_PERIOD_HOURS).toString());
    }
    metricsCleaner = new MetricsCleaner(this, retentionHours, deletionPeriod, failureMeter, metricsRegistry);
    // create thread to process insertion of all metrics
    metricsWriter = new RocksDbMetricsWriter(this, this.queue, this.failureMeter);
    int cacheCapacity = Integer.parseInt(config.get(DaemonConfig.STORM_ROCKSDB_METADATA_STRING_CACHE_CAPACITY).toString());
    StringMetadataCache.init(metricsWriter, cacheCapacity);
    readOnlyStringMetadataCache = StringMetadataCache.getReadOnlyStringMetadataCache();
    // init the writer once the cache is setup
    metricsWriter.init();
    // start threads after metadata cache created
    Thread thread = new Thread(metricsCleaner, "RocksDbMetricsCleaner");
    thread.setDaemon(true);
    thread.start();
    thread = new Thread(metricsWriter, "RocksDbMetricsWriter");
    thread.setDaemon(true);
    thread.start();
}
Also used: ReadOptions (org.rocksdb.ReadOptions), FilterOptions (org.apache.storm.metricstore.FilterOptions), WriteOptions (org.rocksdb.WriteOptions), Options (org.rocksdb.Options), RocksDBException (org.rocksdb.RocksDBException), BlockBasedTableConfig (org.rocksdb.BlockBasedTableConfig), MetricException (org.apache.storm.metricstore.MetricException)
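
The Options wiring above, a hash-search index plus a capped prefix extractor, can be reproduced in isolation. A minimal sketch, assuming a recent RocksDB Java client; the KEY_SIZE constant stands in for storm's RocksDbKey.KEY_SIZE and the helper name openWithHashIndex is an illustrative assumption.

import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.IndexType;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

// Hypothetical fixed key length; stands in for storm's RocksDbKey.KEY_SIZE.
static final int KEY_SIZE = 38;

// Open a database whose block-based tables use a hash-search index.
static RocksDB openWithHashIndex(String path) throws RocksDBException {
    RocksDB.loadLibrary();
    try (Options options = new Options().setCreateIfMissing(true)) {
        BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        // a hash index speeds up point and prefix lookups...
        tableConfig.setIndexType(IndexType.kHashSearch);
        options.setTableFormatConfig(tableConfig);
        // ...but requires a prefix extractor; cap prefixes at KEY_SIZE bytes
        options.useCappedPrefixExtractor(KEY_SIZE);
        // RocksDB.open copies what it needs, so options can be closed afterwards
        return RocksDB.open(options, path);
    }
}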

Aggregations

Classes most frequently imported alongside RocksDBException across the indexed examples, with the number of examples each appears in:

RocksDBException (org.rocksdb.RocksDBException): 66
IOException (java.io.IOException): 17
ColumnFamilyHandle (org.rocksdb.ColumnFamilyHandle): 17
ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException): 11
ColumnFamilyDescriptor (org.rocksdb.ColumnFamilyDescriptor): 11
ArrayList (java.util.ArrayList): 10
WriteBatch (org.rocksdb.WriteBatch): 10
HashMap (java.util.HashMap): 8
Map (java.util.Map): 8
MetricException (org.apache.storm.metricstore.MetricException): 8
WriteOptions (org.rocksdb.WriteOptions): 7
Options (org.rocksdb.Options): 6
File (java.io.File): 5
DBOptions (org.rocksdb.DBOptions): 5
FlushOptions (org.rocksdb.FlushOptions): 5
RocksDB (org.rocksdb.RocksDB): 5
DataInputViewStreamWrapper (org.apache.flink.core.memory.DataInputViewStreamWrapper): 4
ColumnFamilyOptions (org.rocksdb.ColumnFamilyOptions): 4
ReadOptions (org.rocksdb.ReadOptions): 4
RocksIterator (org.rocksdb.RocksIterator): 4