
Example 1 with ByteBuf

Use of org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf in project hbase by apache.

In the class NettyRpcFrameDecoder, the method getHeader:

private RPCProtos.RequestHeader getHeader(ByteBuf in, int headerSize) throws IOException {
    ByteBuf msg = in.readRetainedSlice(headerSize);
    try {
        byte[] array;
        int offset;
        int length = msg.readableBytes();
        if (msg.hasArray()) {
            // Heap buffer: read straight from the backing array, no copy needed.
            array = msg.array();
            offset = msg.arrayOffset() + msg.readerIndex();
        } else {
            // Direct buffer: copy the readable bytes into a fresh array.
            array = new byte[length];
            msg.getBytes(msg.readerIndex(), array, 0, length);
            offset = 0;
        }
        RPCProtos.RequestHeader.Builder builder = RPCProtos.RequestHeader.newBuilder();
        ProtobufUtil.mergeFrom(builder, array, offset, length);
        return builder.build();
    } finally {
        msg.release();
    }
}
Also used: ByteBuf (org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf)
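
The hasArray() branch above is a standard Netty idiom: a heap buffer exposes its backing array directly, while a direct buffer forces a copy. A minimal standalone sketch of the same pattern (HeaderBytes is a hypothetical helper written for illustration, not HBase code):

import org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf;

final class HeaderBytes {
    final byte[] array;
    final int offset;
    final int length;

    private HeaderBytes(byte[] array, int offset, int length) {
        this.array = array;
        this.offset = offset;
        this.length = length;
    }

    /** Extracts the readable bytes, copying only when the buffer is not heap-backed. */
    static HeaderBytes of(ByteBuf msg) {
        int length = msg.readableBytes();
        if (msg.hasArray()) {
            // Heap buffer: hand out the backing array plus the correct offset.
            return new HeaderBytes(msg.array(), msg.arrayOffset() + msg.readerIndex(), length);
        }
        // Direct buffer: copy the readable bytes into a fresh array.
        byte[] copy = new byte[length];
        msg.getBytes(msg.readerIndex(), copy, 0, length);
        return new HeaderBytes(copy, 0, length);
    }
}

Note that getHeader() above works on a retained slice and releases it in a finally block; a caller of a helper like this must keep the same discipline, since the returned array may alias the buffer's memory.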

Example 2 with ByteBuf

Use of org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf in project hbase by apache.

In the class NettyRpcServerRequestDecoder, the method channelRead:

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    ByteBuf input = (ByteBuf) msg;
    // add back the 4-byte length field stripped by the preceding frame decoder
    metrics.receivedBytes(input.readableBytes() + 4);
    connection.process(input);
}
Also used: ByteBuf (org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf)
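
The + 4 in the metrics call assumes that an earlier pipeline stage strips a 4-byte length prefix before this handler sees the message. A sketch of such a framing stage (illustrative only; the actual HBase server pipeline setup may differ):

import org.apache.hbase.thirdparty.io.netty.channel.ChannelPipeline;
import org.apache.hbase.thirdparty.io.netty.handler.codec.LengthFieldBasedFrameDecoder;

final class FramingSketch {
    static void addLengthFraming(ChannelPipeline pipeline, int maxFrameLength) {
        // length field at offset 0, 4 bytes wide, no adjustment,
        // strip the 4-byte prefix so downstream handlers see only the payload
        pipeline.addLast("frameDecoder",
            new LengthFieldBasedFrameDecoder(maxFrameLength, 0, 4, 0, 4));
    }
}

Because the prefix is stripped, the decoder adds the 4 bytes back when reporting received bytes, so the metric reflects what actually crossed the wire.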

Example 3 with ByteBuf

Use of org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf in project hbase by apache.

In the class SaslWrapHandler, the method flush:

@Override
public void flush(ChannelHandlerContext ctx) throws Exception {
    if (queue.isEmpty()) {
        return;
    }
    ByteBuf buf = null;
    try {
        ChannelPromise promise = ctx.newPromise();
        int readableBytes = queue.readableBytes();
        buf = queue.remove(readableBytes, promise);
        byte[] bytes = new byte[readableBytes];
        buf.readBytes(bytes);
        byte[] wrapperBytes = saslClient.wrap(bytes, 0, bytes.length);
        // Wire format: a 4-byte length prefix followed by the SASL-wrapped payload.
        ChannelPromise lenPromise = ctx.newPromise();
        ctx.write(ctx.alloc().buffer(4).writeInt(wrapperBytes.length), lenPromise);
        ChannelPromise contentPromise = ctx.newPromise();
        ctx.write(Unpooled.wrappedBuffer(wrapperBytes), contentPromise);
        PromiseCombiner combiner = new PromiseCombiner();
        combiner.addAll(lenPromise, contentPromise);
        combiner.finish(promise);
        ctx.flush();
    } finally {
        if (buf != null) {
            ReferenceCountUtil.safeRelease(buf);
        }
    }
}
Also used: PromiseCombiner (org.apache.hbase.thirdparty.io.netty.util.concurrent.PromiseCombiner), ChannelPromise (org.apache.hbase.thirdparty.io.netty.channel.ChannelPromise), ByteBuf (org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf)
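
flush() emits each SASL token as a 4-byte length prefix followed by the wrapped payload, so the receiving side has to read the length first, wait for that many bytes, and then unwrap. A minimal sketch of a matching read side (SaslUnwrapSketch is hypothetical and simplified, not the actual HBase unwrap handler):

import java.util.List;
import javax.security.sasl.SaslClient;
import org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf;
import org.apache.hbase.thirdparty.io.netty.buffer.Unpooled;
import org.apache.hbase.thirdparty.io.netty.channel.ChannelHandlerContext;
import org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder;

final class SaslUnwrapSketch extends ByteToMessageDecoder {
    private final SaslClient saslClient;

    SaslUnwrapSketch(SaslClient saslClient) {
        this.saslClient = saslClient;
    }

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
        if (in.readableBytes() < 4) {
            return; // wait for the full length prefix
        }
        in.markReaderIndex();
        int len = in.readInt();
        if (in.readableBytes() < len) {
            in.resetReaderIndex(); // whole wrapped token has not arrived yet
            return;
        }
        byte[] wrapped = new byte[len];
        in.readBytes(wrapped);
        out.add(Unpooled.wrappedBuffer(saslClient.unwrap(wrapped, 0, len)));
    }
}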

Example 4 with ByteBuf

Use of org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf in project hbase by apache.

In the class FanOutOneBlockAsyncDFSOutputHelper, the method requestWriteBlock:

private static void requestWriteBlock(Channel channel, StorageType storageType, OpWriteBlockProto.Builder writeBlockProtoBuilder) throws IOException {
    OpWriteBlockProto proto = writeBlockProtoBuilder.setStorageType(PBHelperClient.convertStorageType(storageType)).build();
    int protoLen = proto.getSerializedSize();
    // 3 bytes of fixed header (2-byte transfer version + 1-byte op code),
    // then a varint length prefix and the protobuf body.
    ByteBuf buffer = channel.alloc().buffer(3 + CodedOutputStream.computeRawVarint32Size(protoLen) + protoLen);
    buffer.writeShort(DataTransferProtocol.DATA_TRANSFER_VERSION);
    buffer.writeByte(Op.WRITE_BLOCK.code);
    proto.writeDelimitedTo(new ByteBufOutputStream(buffer));
    channel.writeAndFlush(buffer);
}
Also used: ByteBufOutputStream (org.apache.hbase.thirdparty.io.netty.buffer.ByteBufOutputStream), OpWriteBlockProto (org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto), ByteBuf (org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf)
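
The buffer written above is laid out as a 2-byte transfer version, a 1-byte op code, and a varint-length-delimited protobuf, which is exactly what writeDelimitedTo produces. A sketch of a reader for that layout (WriteBlockHeaderReader is hypothetical, for illustration only):

import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto;

final class WriteBlockHeaderReader {
    static OpWriteBlockProto read(InputStream in) throws IOException {
        DataInputStream dis = new DataInputStream(in);
        short version = dis.readShort(); // DataTransferProtocol.DATA_TRANSFER_VERSION
        byte op = dis.readByte();        // Op.WRITE_BLOCK.code
        // parseDelimitedFrom consumes the varint length written by writeDelimitedTo
        return OpWriteBlockProto.parseDelimitedFrom(dis);
    }
}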

Example 5 with ByteBuf

Use of org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf in project hbase by apache.

In the class TestFanOutOneBlockAsyncDFSOutputHang, the method testFlushHangWhenOneDataNodeFailedBeforeOtherDataNodeAck:

/**
 * <pre>
 * This test is for HBASE-26679. Consider two datanodes, dn1 and dn2, where dn2 is a slow DN.
 * The sequence of events before HBASE-26679 was:
 * 1. We write some data to {@link FanOutOneBlockAsyncDFSOutput} and then flush it, so there is one
 *    {@link FanOutOneBlockAsyncDFSOutput.Callback} in
 *    {@link FanOutOneBlockAsyncDFSOutput#waitingAckQueue}.
 * 2. The ack from dn1 arrives first and triggers Netty to invoke
 *    {@link FanOutOneBlockAsyncDFSOutput#completed} with dn1's channel; there, dn1's channel is
 *    removed from {@link FanOutOneBlockAsyncDFSOutput.Callback#unfinishedReplicas}.
 * 3. dn2 responds slowly. Before dn2 sends its ack, dn1 is shut down or hits an exception, so
 *    {@link FanOutOneBlockAsyncDFSOutput#failed} is triggered by Netty with dn1's channel.
 *    Because {@link FanOutOneBlockAsyncDFSOutput.Callback#unfinishedReplicas} no longer contains
 *    dn1's channel, the {@link FanOutOneBlockAsyncDFSOutput.Callback} is skipped in the
 *    {@link FanOutOneBlockAsyncDFSOutput#failed} method, {@link FanOutOneBlockAsyncDFSOutput#state}
 *    is set to {@link FanOutOneBlockAsyncDFSOutput.State#BROKEN}, and dn1 and dn2 are both closed
 *    at the end of {@link FanOutOneBlockAsyncDFSOutput#failed}.
 * 4. {@link FanOutOneBlockAsyncDFSOutput#failed} is triggered again by dn2 because it was closed,
 *    but since {@link FanOutOneBlockAsyncDFSOutput#state} is already
 *    {@link FanOutOneBlockAsyncDFSOutput.State#BROKEN}, the whole
 *    {@link FanOutOneBlockAsyncDFSOutput#failed} method is skipped. So waiting on the future
 *    returned by {@link FanOutOneBlockAsyncDFSOutput#flush} would be stuck forever.
 * After HBASE-26679, in step 4 above, even if {@link FanOutOneBlockAsyncDFSOutput#state} is
 * already {@link FanOutOneBlockAsyncDFSOutput.State#BROKEN}, we still try to complete
 * {@link FanOutOneBlockAsyncDFSOutput.Callback#future}.
 * </pre>
 */
@Test
public void testFlushHangWhenOneDataNodeFailedBeforeOtherDataNodeAck() throws Exception {
    DataNodeProperties firstDataNodeProperties = null;
    try {
        final CyclicBarrier dn1AckReceivedCyclicBarrier = new CyclicBarrier(2);
        Map<Channel, DatanodeInfo> datanodeInfoMap = OUT.getDatanodeInfoMap();
        Iterator<Map.Entry<Channel, DatanodeInfo>> iterator = datanodeInfoMap.entrySet().iterator();
        assertTrue(iterator.hasNext());
        Map.Entry<Channel, DatanodeInfo> dn1Entry = iterator.next();
        Channel dn1Channel = dn1Entry.getKey();
        DatanodeInfo dn1DatanodeInfo = dn1Entry.getValue();
        final List<String> protobufDecoderNames = new ArrayList<String>();
        dn1Channel.pipeline().forEach((entry) -> {
            if (ProtobufDecoder.class.isInstance(entry.getValue())) {
                protobufDecoderNames.add(entry.getKey());
            }
        });
        assertTrue(protobufDecoderNames.size() == 1);
        dn1Channel.pipeline().addAfter(protobufDecoderNames.get(0), "dn1AckReceivedHandler", new ChannelInboundHandlerAdapter() {

            @Override
            public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                super.channelRead(ctx, msg);
                dn1AckReceivedCyclicBarrier.await();
            }
        });
        assertTrue(iterator.hasNext());
        Map.Entry<Channel, DatanodeInfo> dn2Entry = iterator.next();
        Channel dn2Channel = dn2Entry.getKey();
        /**
         * Here we add a {@link ChannelInboundHandlerAdapter} that swallows all responses to
         * simulate a slow dn2.
         */
        dn2Channel.pipeline().addFirst(new ChannelInboundHandlerAdapter() {

            @Override
            public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                // Drop ByteBuf responses so dn2's acks never reach the output.
                if (!(msg instanceof ByteBuf)) {
                    ctx.fireChannelRead(msg);
                }
            }
        });
        byte[] b = new byte[10];
        ThreadLocalRandom.current().nextBytes(b);
        OUT.write(b, 0, b.length);
        CompletableFuture<Long> future = OUT.flush(false);
        /**
         * Wait for the ack from dn1.
         */
        dn1AckReceivedCyclicBarrier.await();
        /**
         * The first ack has been received from dn1, so we can stop dn1 now.
         */
        firstDataNodeProperties = findAndKillFirstDataNode(dn1DatanodeInfo);
        assertTrue(firstDataNodeProperties != null);
        try {
            /**
             * Before HBASE-26679 we would get stuck here; after HBASE-26679 we fail soon with an
             * {@link ExecutionException}.
             */
            future.get();
            fail();
        } catch (ExecutionException e) {
            assertTrue(e != null);
            LOG.info("expected exception caught when get future", e);
        }
        /**
         * Make sure all the datanode channels are closed.
         */
        datanodeInfoMap.keySet().forEach(ch -> {
            try {
                ch.closeFuture().get();
            } catch (InterruptedException | ExecutionException e) {
                throw new RuntimeException(e);
            }
        });
    } finally {
        if (firstDataNodeProperties != null) {
            CLUSTER.restartDataNode(firstDataNodeProperties);
        }
    }
}
Also used: DataNodeProperties (org.apache.hadoop.hdfs.MiniDFSCluster.DataNodeProperties), ArrayList (java.util.ArrayList), ChannelHandlerContext (org.apache.hbase.thirdparty.io.netty.channel.ChannelHandlerContext), ByteBuf (org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf), ExecutionException (java.util.concurrent.ExecutionException), DatanodeInfo (org.apache.hadoop.hdfs.protocol.DatanodeInfo), Channel (org.apache.hbase.thirdparty.io.netty.channel.Channel), NioSocketChannel (org.apache.hbase.thirdparty.io.netty.channel.socket.nio.NioSocketChannel), IOException (java.io.IOException), CyclicBarrier (java.util.concurrent.CyclicBarrier), Map (java.util.Map), ChannelInboundHandlerAdapter (org.apache.hbase.thirdparty.io.netty.channel.ChannelInboundHandlerAdapter), Test (org.junit.Test)
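
The gist of the HBASE-26679 fix that this test exercises can be modeled in a few lines. The sketch below is a simplified, self-contained model (not the actual HBase code): once the state is BROKEN, a later failure notification must still complete any pending flush futures instead of returning early.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.CompletableFuture;

final class BrokenStateSketch {
    enum State { STREAMING, BROKEN }

    private State state = State.STREAMING;
    private final Deque<CompletableFuture<Long>> waitingAckQueue = new ArrayDeque<>();

    CompletableFuture<Long> flush() {
        CompletableFuture<Long> future = new CompletableFuture<>();
        waitingAckQueue.add(future);
        return future;
    }

    void failed(Throwable cause) {
        // Pre-fix behavior: an early "if (state == State.BROKEN) return;" here
        // meant a callback still in waitingAckQueue would never complete, and a
        // caller blocked on flush().get() would hang forever.
        state = State.BROKEN;
        // Post-fix behavior: always drain and fail the pending futures.
        CompletableFuture<Long> future;
        while ((future = waitingAckQueue.poll()) != null) {
            future.completeExceptionally(cause);
        }
    }
}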

Aggregations

ByteBuf (org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf): 12
IOException (java.io.IOException): 5
InterruptedIOException (java.io.InterruptedIOException): 4
ChannelPromise (org.apache.hbase.thirdparty.io.netty.channel.ChannelPromise): 3
PromiseCombiner (org.apache.hbase.thirdparty.io.netty.util.concurrent.PromiseCombiner): 3
ExecutionException (java.util.concurrent.ExecutionException): 2
CellBlockMeta (org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.CellBlockMeta): 2
RequestHeader (org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader): 2
PacketHeader (org.apache.hadoop.hdfs.protocol.datatransfer.PacketHeader): 2
ByteBufOutputStream (org.apache.hbase.thirdparty.io.netty.buffer.ByteBufOutputStream): 2
ArrayList (java.util.ArrayList): 1
Map (java.util.Map): 1
CompletableFuture (java.util.concurrent.CompletableFuture): 1
CyclicBarrier (java.util.concurrent.CyclicBarrier): 1
DoNotRetryIOException (org.apache.hadoop.hbase.DoNotRetryIOException): 1
IPCUtil.buildRequestHeader (org.apache.hadoop.hbase.ipc.IPCUtil.buildRequestHeader): 1
DataNodeProperties (org.apache.hadoop.hdfs.MiniDFSCluster.DataNodeProperties): 1
DatanodeInfo (org.apache.hadoop.hdfs.protocol.DatanodeInfo): 1
OpWriteBlockProto (org.apache.hadoop.hdfs.protocol.proto.DataTransferProtos.OpWriteBlockProto): 1
Channel (org.apache.hbase.thirdparty.io.netty.channel.Channel): 1