Example 1 with ByteBufferCell

Use of org.apache.hadoop.hbase.ByteBufferCell in project hbase by apache.

From class TestTagCompressionContext, method testCompressUncompressTagsWithOffheapKeyValue1:

@Test
public void testCompressUncompressTagsWithOffheapKeyValue1() throws Exception {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream daos = new ByteBufferWriterDataOutputStream(baos);
    TagCompressionContext context = new TagCompressionContext(LRUDictionary.class, Byte.MAX_VALUE);
    ByteBufferCell kv1 = (ByteBufferCell) createOffheapKVWithTags(2);
    int tagsLength1 = kv1.getTagsLength();
    context.compressTags(daos, kv1.getTagsByteBuffer(), kv1.getTagsPosition(), tagsLength1);
    ByteBufferCell kv2 = (ByteBufferCell) createOffheapKVWithTags(3);
    int tagsLength2 = kv2.getTagsLength();
    context.compressTags(daos, kv2.getTagsByteBuffer(), kv2.getTagsPosition(), tagsLength2);
    // Reset the dictionary so uncompressTags rebuilds it from the same empty state.
    context.clear();
    byte[] dest = new byte[tagsLength1];
    // baos here is HBase's own ByteArrayOutputStream, which exposes its backing array.
    ByteBuffer ob = ByteBuffer.wrap(baos.getBuffer());
    context.uncompressTags(new SingleByteBuff(ob), dest, 0, tagsLength1);
    assertTrue(Bytes.equals(kv1.getTagsArray(), kv1.getTagsOffset(), tagsLength1, dest, 0, tagsLength1));
    dest = new byte[tagsLength2];
    context.uncompressTags(new SingleByteBuff(ob), dest, 0, tagsLength2);
    assertTrue(Bytes.equals(kv2.getTagsArray(), kv2.getTagsOffset(), tagsLength2, dest, 0, tagsLength2));
}
Also used: ByteBufferCell(org.apache.hadoop.hbase.ByteBufferCell) DataOutputStream(java.io.DataOutputStream) SingleByteBuff(org.apache.hadoop.hbase.nio.SingleByteBuff) ByteBuffer(java.nio.ByteBuffer) Test(org.junit.Test)
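The roundtrip above relies on both sides building identical dictionary state. As a rough, self-contained sketch of that dictionary-coding idea (illustrative only — this is not HBase's actual TagCompressionContext or its wire format), repeated tags can be replaced with a short back-reference:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal sketch of dictionary-based tag coding, loosely modelled on
// TagCompressionContext. Markers, index width, and names are illustrative.
public class TagDictSketch {

    private final List<byte[]> dict = new ArrayList<>();

    private int find(byte[] tag) {
        for (int i = 0; i < dict.size(); i++) {
            if (Arrays.equals(dict.get(i), tag)) return i;
        }
        return -1;
    }

    /** Write either a back-reference into the dictionary or a literal entry. */
    public void compress(ByteArrayOutputStream out, byte[] tag) {
        int idx = find(tag);
        if (idx >= 0) {
            out.write(1);            // marker: dictionary hit
            out.write(idx);          // single-byte index (sketch only)
        } else {
            out.write(0);            // marker: literal
            out.write(tag.length);   // single-byte length (sketch only)
            out.write(tag, 0, tag.length);
            dict.add(tag);
        }
    }

    /** Read one tag back, updating the dictionary exactly as compress() did. */
    public byte[] uncompress(ByteArrayInputStream in) {
        int marker = in.read();
        if (marker == 1) {
            return dict.get(in.read());
        }
        int len = in.read();
        byte[] tag = new byte[len];
        in.read(tag, 0, len);
        dict.add(tag);
        return tag;
    }

    public void clear() { dict.clear(); }
}
```

As in the test, clear() is called between writing and reading so the decoder starts from the same empty dictionary and rebuilds it entry by entry while it reads.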

Example 2 with ByteBufferCell

Use of org.apache.hadoop.hbase.ByteBufferCell in project hbase by apache.

From class TestTagCompressionContext, method testCompressUncompressTagsWithOffheapKeyValue2:

@Test
public void testCompressUncompressTagsWithOffheapKeyValue2() throws Exception {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream daos = new ByteBufferWriterDataOutputStream(baos);
    TagCompressionContext context = new TagCompressionContext(LRUDictionary.class, Byte.MAX_VALUE);
    ByteBufferCell kv1 = (ByteBufferCell) createOffheapKVWithTags(1);
    int tagsLength1 = kv1.getTagsLength();
    context.compressTags(daos, kv1.getTagsByteBuffer(), kv1.getTagsPosition(), tagsLength1);
    ByteBufferCell kv2 = (ByteBufferCell) createOffheapKVWithTags(3);
    int tagsLength2 = kv2.getTagsLength();
    context.compressTags(daos, kv2.getTagsByteBuffer(), kv2.getTagsPosition(), tagsLength2);
    // Reset the dictionary so uncompressTags rebuilds it from the same empty state.
    context.clear();
    ByteArrayInputStream bais = new ByteArrayInputStream(baos.getBuffer());
    byte[] dest = new byte[tagsLength1];
    context.uncompressTags(bais, dest, 0, tagsLength1);
    assertTrue(Bytes.equals(kv1.getTagsArray(), kv1.getTagsOffset(), tagsLength1, dest, 0, tagsLength1));
    dest = new byte[tagsLength2];
    context.uncompressTags(bais, dest, 0, tagsLength2);
    assertTrue(Bytes.equals(kv2.getTagsArray(), kv2.getTagsOffset(), tagsLength2, dest, 0, tagsLength2));
}
Also used: ByteBufferCell(org.apache.hadoop.hbase.ByteBufferCell) ByteArrayInputStream(java.io.ByteArrayInputStream) DataOutputStream(java.io.DataOutputStream) Test(org.junit.Test)
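This variant reads the compressed stream back through a plain ByteArrayInputStream instead of a SingleByteBuff, but the off-heap side of both tests hinges on the same thing: ByteBufferCell exposes its tags as a buffer plus position rather than a byte[] plus offset. A minimal sketch of reading such a region without disturbing the shared buffer (names here are illustrative, not the HBase API):

```java
import java.nio.ByteBuffer;

// Sketch: copying a tags-like region out of a (possibly direct, off-heap)
// ByteBuffer, given a position and length, the way a ByteBufferCell reader
// would. The class and method names are hypothetical.
public class OffheapTagsSketch {

    /** Copy [pos, pos + len) out of buf into a fresh on-heap array. */
    public static byte[] copyRegion(ByteBuffer buf, int pos, int len) {
        byte[] dest = new byte[len];
        // duplicate() shares the backing memory but has its own cursor,
        // so the caller's position/limit on buf are left untouched.
        ByteBuffer dup = buf.duplicate();
        dup.position(pos);
        dup.get(dest, 0, len);
        return dest;
    }
}
```

Keeping reads off the original buffer's cursor matters when many cells share one backing block buffer, which is the situation these off-heap tests are modelling.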

Example 3 with ByteBufferCell

Use of org.apache.hadoop.hbase.ByteBufferCell in project hbase by apache.

From class RSRpcServices, method addSize:

/**
   * Method to account for the size of retained cells and retained data blocks.
   * @return an object that represents the last referenced block from this response.
   */
Object addSize(RpcCallContext context, Result r, Object lastBlock) {
    if (context != null && r != null && !r.isEmpty()) {
        for (Cell c : r.rawCells()) {
            context.incrementResponseCellSize(CellUtil.estimatedSerializedSizeOf(c));
            // The block actually backing a cell is not directly visible here,
            // so we make a guess from the value's backing store.
            if (c instanceof ByteBufferCell) {
                ByteBufferCell bbCell = (ByteBufferCell) c;
                ByteBuffer bb = bbCell.getValueByteBuffer();
                if (bb != lastBlock) {
                    context.incrementResponseBlockSize(bb.capacity());
                    lastBlock = bb;
                }
            } else {
                // We're using the last block being the same as the current block as
                // a proxy for pointing to a new block. This won't be exact.
                // If there are multiple gets that bounce back and forth
                // Then it's possible that this will over count the size of
                // referenced blocks. However it's better to over count and
                // use two rpcs than to OOME the regionserver.
                byte[] valueArray = c.getValueArray();
                if (valueArray != lastBlock) {
                    context.incrementResponseBlockSize(valueArray.length);
                    lastBlock = valueArray;
                }
            }
        }
    }
    return lastBlock;
}
Also used: ByteBufferCell(org.apache.hadoop.hbase.ByteBufferCell) Cell(org.apache.hadoop.hbase.Cell) ByteBuffer(java.nio.ByteBuffer)
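The dedup-by-identity trick in addSize can be distilled to a few lines. A simplified sketch of the accounting policy (a hypothetical class, not the HBase API): charge a block's capacity only when the cell's backing store is a different object than the last one seen:

```java
// Sketch of the block-size accounting idea from addSize: consecutive cells
// served from the same backing block are charged once. Reference identity
// (==), not equals(), stands in for "same block", just as in addSize.
public class BlockAccounting {

    private long blockBytes = 0;
    private Object lastBlock = null;

    /** Charge this cell's backing store if it differs from the last one; return the running total. */
    public long account(Object backingStore, int capacity) {
        if (backingStore != lastBlock) {   // identity check, as in addSize
            blockBytes += capacity;
            lastBlock = backingStore;
        }
        return blockBytes;
    }
}
```

Because the check is reference identity, runs of cells from one block are charged once, while requests that bounce between blocks can be over-counted — which, as the comment in addSize notes, is the safe direction: better an extra RPC than an OOME'd regionserver.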

Aggregations

ByteBufferCell (org.apache.hadoop.hbase.ByteBufferCell)3 DataOutputStream (java.io.DataOutputStream)2 ByteBuffer (java.nio.ByteBuffer)2 Test (org.junit.Test)2 ByteArrayInputStream (java.io.ByteArrayInputStream)1 Cell (org.apache.hadoop.hbase.Cell)1 SingleByteBuff (org.apache.hadoop.hbase.nio.SingleByteBuff)1