Example 1 with Codec

Use of org.apache.hadoop.hbase.codec.Codec in project hbase by apache.

The class CellBlockBuilder, method encodeCellsTo:

private void encodeCellsTo(OutputStream os, CellScanner cellScanner, Codec codec, CompressionCodec compressor) throws IOException {
    Compressor poolCompressor = null;
    try {
        if (compressor != null) {
            // Hand our configuration to the codec before borrowing a compressor.
            if (compressor instanceof Configurable) {
                ((Configurable) compressor).setConf(this.conf);
            }
            // Borrow a pooled Compressor and wrap the output stream so cells
            // are compressed as they are encoded.
            poolCompressor = CodecPool.getCompressor(compressor);
            os = compressor.createOutputStream(os, poolCompressor);
        }
        Codec.Encoder encoder = codec.getEncoder(os);
        while (cellScanner.advance()) {
            encoder.write(cellScanner.current());
        }
        encoder.flush();
    } catch (BufferOverflowException | IndexOutOfBoundsException e) {
        // The destination buffer could not hold the encoded cells; surface
        // this as a non-retriable failure.
        throw new DoNotRetryIOException(e);
    } finally {
        os.close();
        if (poolCompressor != null) {
            // Always return the borrowed compressor to the pool.
            CodecPool.returnCompressor(poolCompressor);
        }
    }
}
Also used: Codec (org.apache.hadoop.hbase.codec.Codec), CompressionCodec (org.apache.hadoop.io.compress.CompressionCodec), DoNotRetryIOException (org.apache.hadoop.hbase.DoNotRetryIOException), Compressor (org.apache.hadoop.io.compress.Compressor), Configurable (org.apache.hadoop.conf.Configurable), BufferOverflowException (java.nio.BufferOverflowException)
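The structure above is a general wrap-then-encode pattern: when compression is requested, the output stream is wrapped in a compressing stream before the encoder writes through it, and cleanup happens in finally. A hypothetical JDK-only sketch of the same shape (DeflaterOutputStream standing in for the Hadoop compressor stream, length-prefixed byte records standing in for cells; none of these names are HBase API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// Hypothetical JDK-only sketch of encodeCellsTo's wrap-then-encode pattern.
class WrapThenEncode {

    static byte[] encode(byte[][] records, boolean compress) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        OutputStream os = baos;
        if (compress) {
            // Mirrors os = compressor.createOutputStream(os, poolCompressor):
            // the encoder below writes through the compressing wrapper.
            os = new DeflaterOutputStream(os);
        }
        DataOutputStream encoder = new DataOutputStream(os);
        try {
            for (byte[] r : records) {
                encoder.writeInt(r.length);
                encoder.write(r);
            }
            encoder.flush();
        } finally {
            // Like encodeCellsTo, close the (possibly wrapped) stream in finally;
            // closing a DeflaterOutputStream also finishes the compressed block.
            encoder.close();
        }
        return baos.toByteArray();
    }

    static int countRecords(byte[] block, boolean compressed) throws IOException {
        DataInputStream in = new DataInputStream(compressed
                ? new InflaterInputStream(new ByteArrayInputStream(block))
                : new ByteArrayInputStream(block));
        int n = 0;
        try {
            while (true) {
                byte[] r = new byte[in.readInt()];
                in.readFully(r);
                n++;
            }
        } catch (EOFException endOfBlock) {
            return n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[][] records = { { 1, 2 }, { 3 } };
        if (countRecords(encode(records, true), true) != 2) {
            throw new IllegalStateException("round trip failed");
        }
    }
}
```

Note the same ordering as the HBase method: the wrapper must be in place before the encoder is created, since the encoder captures whichever stream it is given.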

Example 2 with Codec

Use of org.apache.hadoop.hbase.codec.Codec in project hbase by apache.

The class CodecPerformance, method doCodec:

static void doCodec(final Codec codec, final Cell[] cells, final int cycles, final int count, final int initialBufferSize) throws IOException {
    byte[] bytes = null;
    Cell[] cellsDecoded = null;
    // Encode the same cells for the requested number of cycles; only the
    // last cycle's output is kept for the decode phase.
    for (int i = 0; i < cycles; i++) {
        ByteArrayOutputStream baos = new ByteArrayOutputStream(initialBufferSize);
        Codec.Encoder encoder = codec.getEncoder(baos);
        bytes = runEncoderTest(i, initialBufferSize, baos, encoder, cells);
    }
    // Decode what the last encode cycle produced.
    for (int i = 0; i < cycles; i++) {
        ByteArrayInputStream bais = new ByteArrayInputStream(bytes);
        Codec.Decoder decoder = codec.getDecoder(bais);
        cellsDecoded = CodecPerformance.runDecoderTest(i, count, decoder);
    }
    // Verify the encode/decode round trip was lossless.
    verifyCells(cells, cellsDecoded);
}
Also used: CellCodec (org.apache.hadoop.hbase.codec.CellCodec), Codec (org.apache.hadoop.hbase.codec.Codec), MessageCodec (org.apache.hadoop.hbase.codec.MessageCodec), KeyValueCodec (org.apache.hadoop.hbase.codec.KeyValueCodec), ByteArrayInputStream (java.io.ByteArrayInputStream), ByteArrayOutputStream (java.io.ByteArrayOutputStream), Cell (org.apache.hadoop.hbase.Cell)
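doCodec is a micro-benchmark harness shape: loop the encoder, keep the last cycle's bytes, loop the decoder over them, then verify losslessness. A hypothetical JDK-only analogue (DataOutputStream/DataInputStream over ints standing in for the Codec encoder/decoder over cells; assumes cycles >= 1, as the original does):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.Arrays;

// Hypothetical JDK-only analogue of CodecPerformance.doCodec: repeat the
// encode and decode loops, keep only the last result, verify at the end.
class RoundTripHarness {

    static int[] roundTrip(int[] values, int cycles, int initialBufferSize) throws IOException {
        byte[] bytes = null;
        int[] decoded = null;
        for (int i = 0; i < cycles; i++) {
            // Fresh buffer per cycle, as doCodec allocates a new ByteArrayOutputStream.
            ByteArrayOutputStream baos = new ByteArrayOutputStream(initialBufferSize);
            DataOutputStream encoder = new DataOutputStream(baos);
            for (int v : values) {
                encoder.writeInt(v); // stands in for encoder.write(cell)
            }
            encoder.flush();
            bytes = baos.toByteArray();
        }
        for (int i = 0; i < cycles; i++) {
            DataInputStream decoder = new DataInputStream(new ByteArrayInputStream(bytes));
            decoded = new int[values.length];
            for (int j = 0; j < values.length; j++) {
                decoded[j] = decoder.readInt();
            }
        }
        return decoded;
    }

    public static void main(String[] args) throws IOException {
        int[] values = { 1, 2, 3 };
        if (!Arrays.equals(values, roundTrip(values, 2, 16))) {
            throw new IllegalStateException("decoded values differ from input");
        }
    }
}
```

Repeating identical cycles and discarding all but the last result is what makes this usable for timing: it amortizes JIT warm-up while the final verification still checks correctness.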

Example 3 with Codec

Use of org.apache.hadoop.hbase.codec.Codec in project phoenix by apache.

The class ReadWriteKeyValuesWithCodecIT, method writeWALEdit:

private void writeWALEdit(WALCellCodec codec, List<Cell> kvs, FSDataOutputStream out) throws IOException {
    // Write the cell count first so the reader knows how many cells to decode.
    out.writeInt(kvs.size());
    Codec.Encoder cellEncoder = codec.getEncoder(out);
    // We interleave the two lists for code simplicity
    for (Cell kv : kvs) {
        cellEncoder.write(kv);
    }
}
Also used: Codec (org.apache.hadoop.hbase.codec.Codec), Cell (org.apache.hadoop.hbase.Cell)
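The size-prefix framing above only works because a reader performs the mirror-image steps: read the count, then decode exactly that many entries. A hypothetical JDK-only sketch of both sides (writeUTF over strings standing in for the cell encoder; not Phoenix or HBase API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical JDK-only sketch of writeWALEdit's size-prefix framing,
// plus the symmetric read side a WAL replayer would need.
class SizePrefixedEdit {

    static void writeEdit(DataOutputStream out, List<String> kvs) throws IOException {
        // Write the entry count first so the reader knows how many to decode.
        out.writeInt(kvs.size());
        for (String kv : kvs) {
            out.writeUTF(kv); // stands in for cellEncoder.write(kv)
        }
    }

    static List<String> readEdit(DataInputStream in) throws IOException {
        int n = in.readInt();
        List<String> kvs = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            kvs.add(in.readUTF());
        }
        return kvs;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        writeEdit(new DataOutputStream(buf), List.of("row1/cf:a", "row2/cf:b"));
        List<String> back = readEdit(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        if (!back.equals(List.of("row1/cf:a", "row2/cf:b"))) {
            throw new IllegalStateException("edit round trip failed");
        }
    }
}
```

The count prefix lets several edits share one stream: each reader call consumes exactly one edit and leaves the stream positioned at the next.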

Aggregations

Codec (org.apache.hadoop.hbase.codec.Codec): 3
Cell (org.apache.hadoop.hbase.Cell): 2
ByteArrayInputStream (java.io.ByteArrayInputStream): 1
ByteArrayOutputStream (java.io.ByteArrayOutputStream): 1
BufferOverflowException (java.nio.BufferOverflowException): 1
Configurable (org.apache.hadoop.conf.Configurable): 1
DoNotRetryIOException (org.apache.hadoop.hbase.DoNotRetryIOException): 1
CellCodec (org.apache.hadoop.hbase.codec.CellCodec): 1
KeyValueCodec (org.apache.hadoop.hbase.codec.KeyValueCodec): 1
MessageCodec (org.apache.hadoop.hbase.codec.MessageCodec): 1
CompressionCodec (org.apache.hadoop.io.compress.CompressionCodec): 1
Compressor (org.apache.hadoop.io.compress.Compressor): 1