Example 31 with RocksDBException

Use of org.rocksdb.RocksDBException in project jstorm by alibaba, in the class RocksDbHdfsState, method putBatch.

@Override
public void putBatch(Map<K, V> batch) {
    try {
        WriteBatch writeBatch = new WriteBatch();
        for (Map.Entry<K, V> entry : batch.entrySet()) {
            writeBatch.put(serializer.serialize(entry.getKey()), serializer.serialize(entry.getValue()));
        }
        rocksDb.write(new WriteOptions(), writeBatch);
    } catch (RocksDBException e) {
        LOG.error("Failed to put batch={}", batch);
        throw new RuntimeException(e.getMessage());
    }
}
Also used: WriteOptions (org.rocksdb.WriteOptions), RocksDBException (org.rocksdb.RocksDBException), WriteBatch (org.rocksdb.WriteBatch), HashMap (java.util.HashMap), Map (java.util.Map)
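
The WriteBatch and WriteOptions created above wrap native handles that the snippet never releases; in recent RocksDB Java releases both extend RocksObject and implement AutoCloseable (older versions expose dispose() instead). A minimal sketch of a resource-safe variant, assuming the same rocksDb, serializer and LOG fields as the snippet:

// Sketch only: same batching logic, but the native handles are freed via
// try-with-resources even when write() throws. Assumes a recent rocksdbjni
// where WriteBatch and WriteOptions implement AutoCloseable.
public void putBatch(Map<K, V> batch) {
    try (WriteBatch writeBatch = new WriteBatch();
         WriteOptions writeOptions = new WriteOptions()) {
        for (Map.Entry<K, V> entry : batch.entrySet()) {
            writeBatch.put(serializer.serialize(entry.getKey()), serializer.serialize(entry.getValue()));
        }
        // A single write() applies the whole batch atomically.
        rocksDb.write(writeOptions, writeBatch);
    } catch (RocksDBException e) {
        LOG.error("Failed to put batch={}", batch);
        throw new RuntimeException(e.getMessage());
    }
}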

Example 32 with RocksDBException

Use of org.rocksdb.RocksDBException in project jstorm by alibaba, in the class WindowedRocksDbHdfsState, method getBatch.

@Override
public Map<K, V> getBatch(TimeWindow window) {
    Map<K, V> batch = new HashMap<K, V>();
    try {
        ColumnFamilyHandle handler = getColumnFamilyHandle(window);
        RocksIterator itr = rocksDb.newIterator(handler);
        itr.seekToFirst();
        while (itr.isValid()) {
            byte[] rawKey = itr.key();
            byte[] rawValue = itr.value();
            V value = rawValue != null ? (V) serializer.deserialize(rawValue) : null;
            batch.put((K) serializer.deserialize(rawKey), value);
            itr.next();
        }
        return batch;
    } catch (RocksDBException e) {
        LOG.error("Failed to get batch for window={}", window);
        throw new RuntimeException(e.getMessage());
    }
}
Also used: RocksDBException (org.rocksdb.RocksDBException), HashMap (java.util.HashMap), RocksIterator (org.rocksdb.RocksIterator), ColumnFamilyHandle (org.rocksdb.ColumnFamilyHandle)
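
getBatch walks the window's entire column family with the usual seekToFirst()/isValid()/next() idiom. The RocksIterator holds native resources until it is released; a minimal sketch that closes it, assuming a recent rocksdbjni where RocksIterator implements AutoCloseable (older versions expose dispose() instead):

// Sketch only: same full scan of the window's column family, with the native
// iterator released when the loop finishes.
public Map<K, V> getBatch(TimeWindow window) {
    Map<K, V> batch = new HashMap<K, V>();
    try {
        ColumnFamilyHandle handler = getColumnFamilyHandle(window);
        try (RocksIterator itr = rocksDb.newIterator(handler)) {
            for (itr.seekToFirst(); itr.isValid(); itr.next()) {
                byte[] rawValue = itr.value();
                V value = rawValue != null ? (V) serializer.deserialize(rawValue) : null;
                batch.put((K) serializer.deserialize(itr.key()), value);
            }
        }
        return batch;
    } catch (RocksDBException e) {
        LOG.error("Failed to get batch for window={}", window);
        throw new RuntimeException(e.getMessage());
    }
}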

Example 33 with RocksDBException

Use of org.rocksdb.RocksDBException in project jstorm by alibaba, in the class WindowedRocksDbHdfsState, method initRocksDb.

@Override
protected void initRocksDb() {
    windowToCFHandler = new HashMap<>();
    RocksDbOptionsFactory optionFactory = new RocksDbOptionsFactory.Defaults();
    Options options = optionFactory.createOptions(null);
    DBOptions dbOptions = optionFactory.createDbOptions(null);
    ColumnFamilyOptions cfOptions = optionFactory.createColumnFamilyOptions(null);
    String optionsFactoryClass = (String) conf.get(ConfigExtension.ROCKSDB_OPTIONS_FACTORY_CLASS);
    if (optionsFactoryClass != null) {
        RocksDbOptionsFactory udfOptionFactory = (RocksDbOptionsFactory) Utils.newInstance(optionsFactoryClass);
        options = udfOptionFactory.createOptions(options);
        dbOptions = udfOptionFactory.createDbOptions(dbOptions);
        cfOptions = udfOptionFactory.createColumnFamilyOptions(cfOptions);
    }
    try {
        ttlTimeSec = ConfigExtension.getStateTtlTime(conf);
        List<Integer> ttlValues = new ArrayList<>();
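        // Discover the column families already persisted under rocksDbDir; each time window is stored in its own column family.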
        List<byte[]> families = RocksDB.listColumnFamilies(options, rocksDbDir);
        List<ColumnFamilyHandle> columnFamilyHandles = new ArrayList<>();
        List<ColumnFamilyDescriptor> columnFamilyDescriptors = new ArrayList<>();
        if (families != null) {
            for (byte[] bytes : families) {
                columnFamilyDescriptors.add(new ColumnFamilyDescriptor(bytes, cfOptions));
                LOG.debug("Load column family of {}", new String(bytes));
                if (ttlTimeSec > 0)
                    ttlValues.add(ttlTimeSec);
            }
        }
        if (columnFamilyDescriptors.size() > 0) {
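            // When a state TTL is configured, reopen as a TtlDB so expired entries are dropped during compaction; otherwise open a plain RocksDB.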
            if (ttlTimeSec > 0)
                rocksDb = TtlDB.open(dbOptions, rocksDbDir, columnFamilyDescriptors, columnFamilyHandles, ttlValues, false);
            else
                rocksDb = RocksDB.open(dbOptions, rocksDbDir, columnFamilyDescriptors, columnFamilyHandles);
            int n = Math.min(columnFamilyDescriptors.size(), columnFamilyHandles.size());
            LOG.info("Try to load RocksDB with column family, desc_num={}, handler_num={}", columnFamilyDescriptors.size(), columnFamilyHandles.size());
            // skip default column
            for (int i = 1; i < n; i++) {
                windowToCFHandler.put((TimeWindow) serializer.deserialize(columnFamilyDescriptors.get(i).columnFamilyName()), columnFamilyHandles.get(i));
            }
        } else {
            rocksDb = RocksDB.open(options, rocksDbDir);
        }
        rocksDb.compactRange();
        LOG.info("Finish the initialization of RocksDB");
    } catch (RocksDBException e) {
        LOG.error("Failed to open rocksdb located at " + rocksDbDir, e);
        throw new RuntimeException(e.getMessage());
    }
    lastCheckpointFiles = new HashSet<String>();
    lastCleanTime = System.currentTimeMillis();
    lastSuccessBatchId = -1;
}
Also used: ColumnFamilyOptions (org.rocksdb.ColumnFamilyOptions), DBOptions (org.rocksdb.DBOptions), WriteOptions (org.rocksdb.WriteOptions), Options (org.rocksdb.Options), RocksDBException (org.rocksdb.RocksDBException), ArrayList (java.util.ArrayList), ColumnFamilyDescriptor (org.rocksdb.ColumnFamilyDescriptor), ColumnFamilyHandle (org.rocksdb.ColumnFamilyHandle), RocksDbOptionsFactory (com.alibaba.jstorm.cache.rocksdb.RocksDbOptionsFactory)
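
initRocksDb first builds default Options, DBOptions and ColumnFamilyOptions and then lets a user-defined factory, named by ConfigExtension.ROCKSDB_OPTIONS_FACTORY_CLASS, override them. A minimal sketch of such a factory, assuming the interface exposes only the three create* methods called in the snippet above; the tuning values are purely illustrative:

// Sketch of a custom options factory, registered through the
// ConfigExtension.ROCKSDB_OPTIONS_FACTORY_CLASS setting. Uses org.rocksdb.Options,
// org.rocksdb.DBOptions and org.rocksdb.ColumnFamilyOptions.
public class SampleRocksDbOptionsFactory implements RocksDbOptionsFactory {
    @Override
    public Options createOptions(Options currentOptions) {
        Options options = currentOptions != null ? currentOptions : new Options();
        // Illustrative tuning only.
        options.setCreateIfMissing(true);
        return options;
    }

    @Override
    public DBOptions createDbOptions(DBOptions currentDbOptions) {
        DBOptions dbOptions = currentDbOptions != null ? currentDbOptions : new DBOptions();
        dbOptions.setCreateIfMissing(true);
        return dbOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnFamilyOptions(ColumnFamilyOptions currentCfOptions) {
        ColumnFamilyOptions cfOptions = currentCfOptions != null ? currentCfOptions : new ColumnFamilyOptions();
        // Illustrative tuning only: 64 MB memtable per column family.
        cfOptions.setWriteBufferSize(64 * 1024 * 1024L);
        return cfOptions;
    }
}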

Example 34 with RocksDBException

Use of org.rocksdb.RocksDBException in project jstorm by alibaba, in the class WindowedRocksDbHdfsState, method putBatch.

@Override
public void putBatch(TimeWindow window, Map<K, V> batch) {
    try {
        ColumnFamilyHandle handler = getColumnFamilyHandle(window);
        WriteBatch writeBatch = new WriteBatch();
        for (Map.Entry<K, V> entry : batch.entrySet()) {
            writeBatch.put(handler, serializer.serialize(entry.getKey()), serializer.serialize(entry.getValue()));
        }
        rocksDb.write(new WriteOptions(), writeBatch);
    } catch (RocksDBException e) {
        LOG.error("Failed to put batch={} for window={}", batch, window);
        throw new RuntimeException(e.getMessage());
    }
}
Also used: WriteOptions (org.rocksdb.WriteOptions), RocksDBException (org.rocksdb.RocksDBException), WriteBatch (org.rocksdb.WriteBatch), HashMap (java.util.HashMap), Map (java.util.Map), ColumnFamilyHandle (org.rocksdb.ColumnFamilyHandle)

Example 35 with RocksDBException

Use of org.rocksdb.RocksDBException in project jstorm by alibaba, in the class WindowedRocksDbHdfsState, method put.

@Override
public void put(TimeWindow window, Object key, Object value) {
    try {
        ColumnFamilyHandle handler = getColumnFamilyHandle(window);
        rocksDb.put(handler, serializer.serialize(key), serializer.serialize(value));
    } catch (RocksDBException e) {
        LOG.error("Failed to put data, key={}, value={}, for timeWindow={}", key, value, window);
        throw new RuntimeException(e.getMessage());
    }
}
Also used: RocksDBException (org.rocksdb.RocksDBException), ColumnFamilyHandle (org.rocksdb.ColumnFamilyHandle)
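
Every snippet above rethrows as new RuntimeException(e.getMessage()), which discards the RocksDB status details and the original stack trace. A small hedged variant of this put() that keeps the cause attached, assuming the same SLF4J-style LOG and serializer as the snippet:

// Sketch only: identical put(), but the RocksDBException travels along as the
// cause of the rethrown RuntimeException and is attached to the log entry.
public void put(TimeWindow window, Object key, Object value) {
    try {
        ColumnFamilyHandle handler = getColumnFamilyHandle(window);
        rocksDb.put(handler, serializer.serialize(key), serializer.serialize(value));
    } catch (RocksDBException e) {
        LOG.error("Failed to put data, key={}, value={}, for timeWindow={}", key, value, window, e);
        throw new RuntimeException("RocksDB put failed for window " + window, e);
    }
}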

Aggregations

RocksDBException (org.rocksdb.RocksDBException): 66 usages
IOException (java.io.IOException): 17 usages
ColumnFamilyHandle (org.rocksdb.ColumnFamilyHandle): 17 usages
ProcessorStateException (org.apache.kafka.streams.errors.ProcessorStateException): 11 usages
ColumnFamilyDescriptor (org.rocksdb.ColumnFamilyDescriptor): 11 usages
ArrayList (java.util.ArrayList): 10 usages
WriteBatch (org.rocksdb.WriteBatch): 10 usages
HashMap (java.util.HashMap): 8 usages
Map (java.util.Map): 8 usages
MetricException (org.apache.storm.metricstore.MetricException): 8 usages
WriteOptions (org.rocksdb.WriteOptions): 7 usages
Options (org.rocksdb.Options): 6 usages
File (java.io.File): 5 usages
DBOptions (org.rocksdb.DBOptions): 5 usages
FlushOptions (org.rocksdb.FlushOptions): 5 usages
RocksDB (org.rocksdb.RocksDB): 5 usages
DataInputViewStreamWrapper (org.apache.flink.core.memory.DataInputViewStreamWrapper): 4 usages
ColumnFamilyOptions (org.rocksdb.ColumnFamilyOptions): 4 usages
ReadOptions (org.rocksdb.ReadOptions): 4 usages
RocksIterator (org.rocksdb.RocksIterator): 4 usages