Example 1 with RawPayloadTableEntry

Use of co.cask.cdap.messaging.store.RawPayloadTableEntry in project cdap by caskdata.

From the class LevelDBPayloadTable, method read().

@Override
protected CloseableIterator<RawPayloadTableEntry> read(byte[] startRow, byte[] stopRow, final int limit) throws IOException {
    final DBScanIterator iterator = new DBScanIterator(levelDB, startRow, stopRow);
    return new AbstractCloseableIterator<RawPayloadTableEntry>() {

        // One entry object is reused for every row this iterator returns.
        private final RawPayloadTableEntry tableEntry = new RawPayloadTableEntry();

        private boolean closed = false;

        private int maxLimit = limit;

        @Override
        protected RawPayloadTableEntry computeNext() {
            // Stop when closed, when the row limit is exhausted, or when the scan runs out.
            if (closed || maxLimit <= 0 || !iterator.hasNext()) {
                return endOfData();
            }
            Map.Entry<byte[], byte[]> row = iterator.next();
            maxLimit--;
            return tableEntry.set(row.getKey(), row.getValue());
        }

        @Override
        public void close() {
            try {
                iterator.close();
            } finally {
                endOfData();
                closed = true;
            }
        }
    };
}
Also used : AbstractCloseableIterator(co.cask.cdap.api.dataset.lib.AbstractCloseableIterator) RawPayloadTableEntry(co.cask.cdap.messaging.store.RawPayloadTableEntry) Map(java.util.Map)
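
A minimal caller sketch, not from the source: read() is protected here, so assume this runs inside a LevelDBPayloadTable subclass, in a method that may throw IOException; startRow, stopRow, the limit of 100, and handle(...) are illustrative. Because the iterator reuses one RawPayloadTableEntry across next() calls, any bytes held past an iteration must be copied.

// Sketch only: drain a bounded scan, copying bytes before retaining them.
// Assumes imports of CloseableIterator, RawPayloadTableEntry, and java.util.Arrays.
try (CloseableIterator<RawPayloadTableEntry> iterator = read(startRow, stopRow, 100)) {
    while (iterator.hasNext()) {
        RawPayloadTableEntry entry = iterator.next();
        // Copy both arrays: the iterator overwrites this entry on the next call.
        byte[] key = Arrays.copyOf(entry.getKey(), entry.getKey().length);
        byte[] value = Arrays.copyOf(entry.getValue(), entry.getValue().length);
        handle(key, value); // handle(...) is a hypothetical consumer
    }
}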

Example 2 with RawPayloadTableEntry

Use of co.cask.cdap.messaging.store.RawPayloadTableEntry in project cdap by caskdata.

From the class LevelDBPayloadTable, method persist().

@Override
public void persist(Iterator<RawPayloadTableEntry> entries) throws IOException {
    try (WriteBatch writeBatch = levelDB.createWriteBatch()) {
        while (entries.hasNext()) {
            RawPayloadTableEntry entry = entries.next();
            byte[] key = entry.getKey();
            byte[] value = entry.getValue();
            // LevelDB doesn't copy the arrays it is given, and the RawPayloadTableEntry
            // object is reused by the caller, so we must copy the key and value here.
            writeBatch.put(Arrays.copyOf(key, key.length), Arrays.copyOf(value, value.length));
        }
        levelDB.write(writeBatch, WRITE_OPTIONS);
    } catch (DBException ex) {
        throw new IOException(ex);
    }
}
Also used : DBException(org.iq80.leveldb.DBException) RawPayloadTableEntry(co.cask.cdap.messaging.store.RawPayloadTableEntry) IOException(java.io.IOException) WriteBatch(org.iq80.leveldb.WriteBatch)
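
A minimal caller sketch, not from the source: it feeds persist() a single-use iterator. payloadTable is an assumed LevelDBPayloadTable instance, and the key and payload bytes are made up (real keys follow the payload table's row-key format). Reusing one entry object is safe here precisely because persist() copies the key and value before batching.

// Sketch only: persist a single entry; the table copies the bytes internally.
// Assumes imports of RawPayloadTableEntry, StandardCharsets, and Collections.
RawPayloadTableEntry entry = new RawPayloadTableEntry();
entry.set("row-1".getBytes(StandardCharsets.UTF_8),
          "payload-1".getBytes(StandardCharsets.UTF_8));
payloadTable.persist(Collections.singletonList(entry).iterator());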

Example 3 with RawPayloadTableEntry

Use of co.cask.cdap.messaging.store.RawPayloadTableEntry in project cdap by caskdata.

From the class HBasePayloadTable, method persist().

@Override
public void persist(Iterator<RawPayloadTableEntry> entries) throws IOException {
    List<Put> batchPuts = new ArrayList<>();
    while (entries.hasNext()) {
        RawPayloadTableEntry tableEntry = entries.next();
        Put put = tableUtil.buildPut(rowKeyDistributor.getDistributedKey(tableEntry.getKey()))
            .add(columnFamily, COL, tableEntry.getValue())
            .build();
        batchPuts.add(put);
    }
    try {
        if (!batchPuts.isEmpty()) {
            hTable.put(batchPuts);
            if (!hTable.isAutoFlush()) {
                hTable.flushCommits();
            }
        }
    } catch (IOException e) {
        throw exceptionHandler.handle(e);
    }
}
Also used : ArrayList(java.util.ArrayList) RawPayloadTableEntry(co.cask.cdap.messaging.store.RawPayloadTableEntry) IOException(java.io.IOException) Put(org.apache.hadoop.hbase.client.Put)
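
A minimal caller sketch under the same caveats: hBasePayloadTable is an assumed HBasePayloadTable instance and the rows are illustrative. Using a fresh entry object per element keeps the batch independent of whether the Put construction copies the underlying buffers.

// Sketch only: build a small batch and hand persist() its iterator.
// Assumes imports of RawPayloadTableEntry, StandardCharsets, ArrayList, and List.
List<RawPayloadTableEntry> batch = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    RawPayloadTableEntry e = new RawPayloadTableEntry(); // fresh object per element
    e.set(("row-" + i).getBytes(StandardCharsets.UTF_8),
          ("payload-" + i).getBytes(StandardCharsets.UTF_8));
    batch.add(e);
}
hBasePayloadTable.persist(batch.iterator());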

Example 4 with RawPayloadTableEntry

Use of co.cask.cdap.messaging.store.RawPayloadTableEntry in project cdap by caskdata.

From the class HBasePayloadTable, method read().

@Override
public CloseableIterator<RawPayloadTableEntry> read(byte[] startRow, byte[] stopRow, final int limit) throws IOException {
    Scan scan = tableUtil.buildScan()
        .setStartRow(startRow)
        .setStopRow(stopRow)
        .setCaching(scanCacheRows)
        .build();
    final ResultScanner scanner = DistributedScanner.create(hTable, scan, rowKeyDistributor, scanExecutor);
    return new AbstractCloseableIterator<RawPayloadTableEntry>() {

        // One entry object is reused for every row this iterator returns.
        private final RawPayloadTableEntry tableEntry = new RawPayloadTableEntry();

        private boolean closed = false;

        private int maxLimit = limit;

        @Override
        protected RawPayloadTableEntry computeNext() {
            // Stop when closed or when the row limit is exhausted; scanner
            // exhaustion is signaled by the null check below.
            if (closed || maxLimit <= 0) {
                return endOfData();
            }
            Result result;
            try {
                result = scanner.next();
            } catch (IOException e) {
                throw exceptionHandler.handleAndWrap(e);
            }
            if (result == null) {
                return endOfData();
            }
            maxLimit--;
            return tableEntry.set(rowKeyDistributor.getOriginalKey(result.getRow()), result.getValue(columnFamily, COL));
        }

        @Override
        public void close() {
            try {
                scanner.close();
            } finally {
                endOfData();
                closed = true;
            }
        }
    };
}
Also used : ResultScanner(org.apache.hadoop.hbase.client.ResultScanner) AbstractCloseableIterator(co.cask.cdap.api.dataset.lib.AbstractCloseableIterator) Scan(org.apache.hadoop.hbase.client.Scan) RawPayloadTableEntry(co.cask.cdap.messaging.store.RawPayloadTableEntry) IOException(java.io.IOException) Result(org.apache.hadoop.hbase.client.Result)
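
A minimal consumer sketch for the HBase read() above, with hBasePayloadTable, the key range, and the limit of 50 all assumed for illustration. try-with-resources ensures the underlying ResultScanner is closed, and, as with the LevelDB iterator, bytes must be copied if kept across iterations.

// Sketch only: scan up to 50 rows and close the scanner deterministically.
// Assumes imports of CloseableIterator, RawPayloadTableEntry, and java.util.Arrays.
try (CloseableIterator<RawPayloadTableEntry> iterator =
         hBasePayloadTable.read(startRow, stopRow, 50)) {
    while (iterator.hasNext()) {
        RawPayloadTableEntry entry = iterator.next();
        // Copy anything retained past this iteration: the entry object is reused.
        byte[] value = Arrays.copyOf(entry.getValue(), entry.getValue().length);
        handle(entry.getKey(), value); // handle(...) is a hypothetical consumer
    }
}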

Aggregations

RawPayloadTableEntry (co.cask.cdap.messaging.store.RawPayloadTableEntry): 4 uses
IOException (java.io.IOException): 3 uses
AbstractCloseableIterator (co.cask.cdap.api.dataset.lib.AbstractCloseableIterator): 2 uses
ArrayList (java.util.ArrayList): 1 use
Map (java.util.Map): 1 use
Put (org.apache.hadoop.hbase.client.Put): 1 use
Result (org.apache.hadoop.hbase.client.Result): 1 use
ResultScanner (org.apache.hadoop.hbase.client.ResultScanner): 1 use
Scan (org.apache.hadoop.hbase.client.Scan): 1 use
DBException (org.iq80.leveldb.DBException): 1 use
WriteBatch (org.iq80.leveldb.WriteBatch): 1 use