
Example 1 with RequestHeader

use of org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader in project hbase by apache.

the class SimpleServerRpcConnection method readAndProcess.

/**
 * Read off the wire. If there is not enough data to read, update the connection state with what
 * we have and return.
 * @return -1 on failure (the caller will then close the connection), else zero or more bytes read.
 * @throws IOException
 * @throws InterruptedException
 */
public int readAndProcess() throws IOException, InterruptedException {
    // If we have not read the connection setup preamble, look to see if that is on the wire.
    if (!connectionPreambleRead) {
        int count = readPreamble();
        if (!connectionPreambleRead) {
            return count;
        }
    }
    // Try and read in an int. It will be the length of the data to read (or -1 if a ping). We read
    // the integer length into the 4-byte this.dataLengthBuffer.
    int count = read4Bytes();
    if (count < 0 || dataLengthBuffer.remaining() > 0) {
        return count;
    }
    // We have read a length and we have read the preamble. It is either the connection header
    // or it is a request.
    if (data == null) {
        dataLengthBuffer.flip();
        int dataLength = dataLengthBuffer.getInt();
        if (dataLength == RpcClient.PING_CALL_ID) {
            if (!useWrap) {
                // covers the !useSasl too
                dataLengthBuffer.clear();
                // ping message
                return 0;
            }
        }
        if (dataLength < 0) {
            // A data length of zero is legal; only a negative length is rejected here.
            throw new DoNotRetryIOException("Unexpected data length " + dataLength + "!! from " + getHostAddress());
        }
        if (dataLength > this.rpcServer.maxRequestSize) {
            String msg = "RPC data length of " + dataLength + " received from " + getHostAddress() + " is greater than max allowed " + this.rpcServer.maxRequestSize + ". Set \"" + SimpleRpcServer.MAX_REQUEST_SIZE + "\" on server to override this limit (not recommended)";
            SimpleRpcServer.LOG.warn(msg);
            if (connectionHeaderRead && connectionPreambleRead) {
                incRpcCount();
                // Construct InputStream for the non-blocking SocketChannel
                // We need the InputStream because we want to read only the request header
                // instead of the whole rpc.
                ByteBuffer buf = ByteBuffer.allocate(1);
                InputStream is = new InputStream() {

                    @Override
                    public int read() throws IOException {
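                        // Pull one byte at a time from the non-blocking channel into the shared
                        // 1-byte buffer; flip to read it out, then flip again so the buffer can be
                        // refilled on the next call.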
                        SimpleServerRpcConnection.this.rpcServer.channelRead(channel, buf);
                        buf.flip();
                        int x = buf.get();
                        buf.flip();
                        return x;
                    }
                };
                CodedInputStream cis = CodedInputStream.newInstance(is);
                int headerSize = cis.readRawVarint32();
                Message.Builder builder = RequestHeader.newBuilder();
                ProtobufUtil.mergeFrom(builder, cis, headerSize);
                RequestHeader header = (RequestHeader) builder.build();
                // Notify the client about the offending request
                SimpleServerCall reqTooBig = new SimpleServerCall(header.getCallId(), this.service, null, null, null, null, this, 0, this.addr, EnvironmentEdgeManager.currentTime(), 0, this.rpcServer.bbAllocator, this.rpcServer.cellBlockBuilder, null, responder);
                RequestTooBigException reqTooBigEx = new RequestTooBigException(msg);
                this.rpcServer.metrics.exception(reqTooBigEx);
                // If the client is recent enough to understand RequestTooBigException, send that;
                // otherwise fall back to a DoNotRetryIOException.
                if (VersionInfoUtil.hasMinimumVersion(connectionHeader.getVersionInfo(), RequestTooBigException.MAJOR_VERSION, RequestTooBigException.MINOR_VERSION)) {
                    reqTooBig.setResponse(null, null, reqTooBigEx, msg);
                } else {
                    reqTooBig.setResponse(null, null, new DoNotRetryIOException(msg), msg);
                }
                // In most cases we will write out the response directly. If not, it is still OK to just
                // close the connection without writing out the reqTooBig response. Do not try to write
                // out directly here, as it would cause a deserialization error if the connection is slow
                // and there is a half-written response already in the queue.
                reqTooBig.sendResponseIfReady();
            }
            // Close the connection
            return -1;
        }
        // Initialize this.data with a ByteBuff.
        // This call will allocate a ByteBuff to read request into and assign to this.data
        // Also when we use some buffer(s) from pool, it will create a CallCleanup instance also and
        // assign to this.callCleanup
        initByteBuffToReadInto(dataLength);
        // Increment the rpc count. This counter will be decreased when we write
        // the response. If we want the connection to be detected as idle properly, we
        // need to keep the inc / dec correct.
        incRpcCount();
    }
    count = channelDataRead(channel, data);
    if (count >= 0 && data.remaining() == 0) {
        // count==0 if dataLength == 0
        process();
    }
    return count;
}
Also used : Message(org.apache.hbase.thirdparty.com.google.protobuf.Message) DoNotRetryIOException(org.apache.hadoop.hbase.DoNotRetryIOException) CodedInputStream(org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream) InputStream(java.io.InputStream) RequestTooBigException(org.apache.hadoop.hbase.exceptions.RequestTooBigException) RequestHeader(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader) ByteBuffer(java.nio.ByteBuffer)
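
readAndProcess() above assumes a fixed framing once the preamble and connection header are done: a 4-byte length prefix, then a varint-delimited RequestHeader, then a varint-delimited request param. The following is only a sketch of that framing from the sending side (RequestFramingSketch and writeRequest are illustrative names, not HBase client code, and cell blocks are omitted), using the standard protobuf writeDelimitedTo:

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader;
import org.apache.hbase.thirdparty.com.google.protobuf.Message;

public class RequestFramingSketch {

    static void writeRequest(OutputStream out, RequestHeader header, Message param) throws IOException {
        // Varint-delimited header and param: exactly what readRawVarint32 + mergeFrom consume server-side.
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        header.writeDelimitedTo(body);
        param.writeDelimitedTo(body);
        // The 4-byte length prefix that read4Bytes() stores in dataLengthBuffer.
        DataOutputStream dos = new DataOutputStream(out);
        dos.writeInt(body.size());
        body.writeTo(dos);
        dos.flush();
    }
}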

Example 2 with RequestHeader

use of org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader in project hbase by apache.

the class TestPriorityRpc method testQosFunctionWithoutKnownArgument.

@Test
public void testQosFunctionWithoutKnownArgument() throws IOException {
    // The request is not using any of the
    // known argument classes (it uses one random request class)
    // (known argument classes are listed in
    // HRegionServer.QosFunctionImpl.knownArgumentClasses)
    RequestHeader.Builder headerBuilder = RequestHeader.newBuilder();
    headerBuilder.setMethodName("foo");
    RequestHeader header = headerBuilder.build();
    RSRpcServices mockRpc = mock(RSRpcServices.class);
    when(mockRpc.getConfiguration()).thenReturn(CONF);
    RSAnnotationReadingPriorityFunction qosFunc = new RSAnnotationReadingPriorityFunction(mockRpc);
    assertEquals(HConstants.NORMAL_QOS, qosFunc.getPriority(header, null, createSomeUser()));
}
Also used : RequestHeader(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader) Test(org.junit.Test)
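
The priority returned here is what an RPC scheduler later uses for routing. The helper below is a hypothetical illustration (isHighPriority is not an HBase method, and SimpleRpcScheduler's exact comparison against its threshold may differ); it only shows how a value such as the NORMAL_QOS returned for the "foo" header above would typically be consumed:

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.ipc.PriorityFunction;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader;

public class PriorityRoutingSketch {

    // Hypothetical helper: route anything above the configured threshold to a priority queue.
    static boolean isHighPriority(PriorityFunction qosFunc, RequestHeader header, User user) {
        int qos = qosFunc.getPriority(header, null, user);
        // HConstants.QOS_THRESHOLD is the same threshold handed to SimpleRpcScheduler in the
        // scheduler tests below; the "foo" header above comes back as NORMAL_QOS, which stays
        // below the threshold and therefore on the general call queue.
        return qos > HConstants.QOS_THRESHOLD;
    }
}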

Example 3 with RequestHeader

use of org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader in project hbase by apache.

the class TestSimpleRpcScheduler method testPluggableRpcQueueCanListenToConfigurationChanges.

@Test
public void testPluggableRpcQueueCanListenToConfigurationChanges() throws Exception {
    Configuration schedConf = HBaseConfiguration.create();
    schedConf.setInt(HConstants.REGION_SERVER_HANDLER_COUNT, 2);
    schedConf.setInt("hbase.ipc.server.max.callqueue.length", 5);
    schedConf.set(RpcExecutor.CALL_QUEUE_TYPE_CONF_KEY, RpcExecutor.CALL_QUEUE_TYPE_PLUGGABLE_CONF_VALUE);
    schedConf.set(RpcExecutor.PLUGGABLE_CALL_QUEUE_CLASS_NAME, "org.apache.hadoop.hbase.ipc.TestPluggableQueueImpl");
    PriorityFunction priority = mock(PriorityFunction.class);
    when(priority.getPriority(any(), any(), any())).thenReturn(HConstants.NORMAL_QOS);
    SimpleRpcScheduler scheduler = new SimpleRpcScheduler(schedConf, 0, 0, 0, priority, HConstants.QOS_THRESHOLD);
    try {
        scheduler.start();
        CallRunner putCallTask = mock(CallRunner.class);
        ServerCall putCall = mock(ServerCall.class);
        putCall.param = RequestConverter.buildMutateRequest(Bytes.toBytes("abc"), new Put(Bytes.toBytes("row")));
        RequestHeader putHead = RequestHeader.newBuilder().setMethodName("mutate").build();
        when(putCallTask.getRpcCall()).thenReturn(putCall);
        when(putCall.getHeader()).thenReturn(putHead);
        assertTrue(scheduler.dispatch(putCallTask));
        schedConf.setInt("hbase.ipc.server.max.callqueue.length", 4);
        scheduler.onConfigurationChange(schedConf);
        assertTrue(TestPluggableQueueImpl.hasObservedARecentConfigurationChange());
        waitUntilQueueEmpty(scheduler);
    } finally {
        scheduler.stop();
    }
}
Also used : Configuration(org.apache.hadoop.conf.Configuration) HBaseConfiguration(org.apache.hadoop.hbase.HBaseConfiguration) RequestHeader(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader) Put(org.apache.hadoop.hbase.client.Put) Test(org.junit.Test)
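
The assertion on hasObservedARecentConfigurationChange() relies on the pluggable queue reacting when scheduler.onConfigurationChange(schedConf) propagates the new Configuration. A minimal sketch of such a reaction, assuming only the public ConfigurationObserver contract (the real TestPluggableQueueImpl does considerably more; the class name and the fallback value below are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.conf.ConfigurationObserver;

public class QueueLengthObserverSketch implements ConfigurationObserver {

    private volatile int maxQueueLength;

    public QueueLengthObserverSketch(Configuration conf) {
        // 10 is just an illustrative fallback; HBase derives the real default from the handler count.
        this.maxQueueLength = conf.getInt("hbase.ipc.server.max.callqueue.length", 10);
    }

    @Override
    public void onConfigurationChange(Configuration conf) {
        // Re-read the limit that the test above lowers from 5 to 4 at runtime.
        this.maxQueueLength = conf.getInt("hbase.ipc.server.max.callqueue.length", this.maxQueueLength);
    }

    public int getMaxQueueLength() {
        return maxQueueLength;
    }
}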

Example 4 with RequestHeader

use of org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader in project hbase by apache.

the class TestSimpleRpcScheduler method testMetaRWScanQueues.

@Test
public void testMetaRWScanQueues() throws Exception {
    Configuration schedConf = HBaseConfiguration.create();
    schedConf.setFloat(RpcExecutor.CALL_QUEUE_HANDLER_FACTOR_CONF_KEY, 1.0f);
    schedConf.setFloat(MetaRWQueueRpcExecutor.META_CALL_QUEUE_READ_SHARE_CONF_KEY, 0.7f);
    schedConf.setFloat(MetaRWQueueRpcExecutor.META_CALL_QUEUE_SCAN_SHARE_CONF_KEY, 0.5f);
    PriorityFunction priority = mock(PriorityFunction.class);
    when(priority.getPriority(any(), any(), any())).thenReturn(HConstants.HIGH_QOS);
    RpcScheduler scheduler = new SimpleRpcScheduler(schedConf, 3, 3, 1, priority, HConstants.QOS_THRESHOLD);
    try {
        scheduler.start();
        CallRunner putCallTask = mock(CallRunner.class);
        ServerCall putCall = mock(ServerCall.class);
        putCall.param = RequestConverter.buildMutateRequest(Bytes.toBytes("abc"), new Put(Bytes.toBytes("row")));
        RequestHeader putHead = RequestHeader.newBuilder().setMethodName("mutate").build();
        when(putCallTask.getRpcCall()).thenReturn(putCall);
        when(putCall.getHeader()).thenReturn(putHead);
        when(putCall.getParam()).thenReturn(putCall.param);
        CallRunner getCallTask = mock(CallRunner.class);
        ServerCall getCall = mock(ServerCall.class);
        RequestHeader getHead = RequestHeader.newBuilder().setMethodName("get").build();
        when(getCallTask.getRpcCall()).thenReturn(getCall);
        when(getCall.getHeader()).thenReturn(getHead);
        CallRunner scanCallTask = mock(CallRunner.class);
        ServerCall scanCall = mock(ServerCall.class);
        scanCall.param = ScanRequest.newBuilder().build();
        RequestHeader scanHead = RequestHeader.newBuilder().setMethodName("scan").build();
        when(scanCallTask.getRpcCall()).thenReturn(scanCall);
        when(scanCall.getHeader()).thenReturn(scanHead);
        when(scanCall.getParam()).thenReturn(scanCall.param);
        ArrayList<Integer> work = new ArrayList<>();
        doAnswerTaskExecution(putCallTask, work, 1, 1000);
        doAnswerTaskExecution(getCallTask, work, 2, 1000);
        doAnswerTaskExecution(scanCallTask, work, 3, 1000);
        // There are 3 queues: [puts], [gets], [scans]
        // so the calls will be interleaved
        scheduler.dispatch(putCallTask);
        scheduler.dispatch(putCallTask);
        scheduler.dispatch(putCallTask);
        scheduler.dispatch(getCallTask);
        scheduler.dispatch(getCallTask);
        scheduler.dispatch(getCallTask);
        scheduler.dispatch(scanCallTask);
        scheduler.dispatch(scanCallTask);
        scheduler.dispatch(scanCallTask);
        while (work.size() < 6) {
            Thread.sleep(100);
        }
        for (int i = 0; i < work.size() - 2; i += 3) {
            assertNotEquals(work.get(i + 0), work.get(i + 1));
            assertNotEquals(work.get(i + 0), work.get(i + 2));
            assertNotEquals(work.get(i + 1), work.get(i + 2));
        }
    } finally {
        scheduler.stop();
    }
}
Also used : Configuration(org.apache.hadoop.conf.Configuration) HBaseConfiguration(org.apache.hadoop.hbase.HBaseConfiguration) ArrayList(java.util.ArrayList) RequestHeader(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader) Put(org.apache.hadoop.hbase.client.Put) Test(org.junit.Test)
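
The "3 queues: [puts], [gets], [scans]" comment follows from the shares configured at the top of the test. The sketch below works through that split; the exact rounding inside MetaRWQueueRpcExecutor may differ slightly, so treat it as an approximation:

public class MetaQueueSplitSketch {

    public static void main(String[] args) {
        // With a handler factor of 1.0 and 3 meta handlers there are 3 meta call queues.
        int numQueues = 3;
        double readShare = 0.7; // META_CALL_QUEUE_READ_SHARE_CONF_KEY
        double scanShare = 0.5; // META_CALL_QUEUE_SCAN_SHARE_CONF_KEY

        int readQueues = Math.max(1, (int) Math.floor(numQueues * readShare));   // 2 of 3 queues serve reads
        int writeQueues = numQueues - readQueues;                                // 1 queue for writes (puts)
        int scanQueues = Math.max(1, (int) Math.floor(readQueues * scanShare));  // 1 of the read queues serves scans
        readQueues -= scanQueues;                                                // 1 queue left for short reads (gets)

        // Prints "1 write / 1 read / 1 scan", matching the interleaving the test asserts.
        System.out.println(writeQueues + " write / " + readQueues + " read / " + scanQueues + " scan");
    }
}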

Example 5 with RequestHeader

use of org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader in project hbase by apache.

the class ServerRpcConnection method processRequest.

/**
 * @param buf
 *          Has the request header, the request param, and optionally an encoded cell block,
 *          all in this one buffer.
 * @throws IOException
 * @throws InterruptedException
 */
protected void processRequest(ByteBuff buf) throws IOException, InterruptedException {
    long totalRequestSize = buf.limit();
    int offset = 0;
    // Here we read in the header. We avoid having pb
    // do its default 4k allocation for CodedInputStream. We force it to use
    // the backing array.
    CodedInputStream cis;
    if (buf.hasArray()) {
        cis = UnsafeByteOperations.unsafeWrap(buf.array(), 0, buf.limit()).newCodedInput();
    } else {
        cis = UnsafeByteOperations.unsafeWrap(new ByteBuffByteInput(buf, 0, buf.limit()), 0, buf.limit()).newCodedInput();
    }
    cis.enableAliasing(true);
    int headerSize = cis.readRawVarint32();
    offset = cis.getTotalBytesRead();
    Message.Builder builder = RequestHeader.newBuilder();
    ProtobufUtil.mergeFrom(builder, cis, headerSize);
    RequestHeader header = (RequestHeader) builder.build();
    offset += headerSize;
    TextMapGetter<RPCTInfo> getter = new TextMapGetter<RPCTInfo>() {

        @Override
        public Iterable<String> keys(RPCTInfo carrier) {
            return carrier.getHeadersMap().keySet();
        }

        @Override
        public String get(RPCTInfo carrier, String key) {
            return carrier.getHeadersMap().get(key);
        }
    };
    Context traceCtx = GlobalOpenTelemetry.getPropagators().getTextMapPropagator().extract(Context.current(), header.getTraceInfo(), getter);
    Span span = TraceUtil.createRemoteSpan("RpcServer.process", traceCtx);
    try (Scope scope = span.makeCurrent()) {
        int id = header.getCallId();
        if (RpcServer.LOG.isTraceEnabled()) {
            RpcServer.LOG.trace("RequestHeader " + TextFormat.shortDebugString(header) + " totalRequestSize: " + totalRequestSize + " bytes");
        }
        // Enforce the max call queue size. This check is a bit late since we have already read in
        // the total request.
        if ((totalRequestSize + this.rpcServer.callQueueSizeInBytes.sum()) > this.rpcServer.maxQueueSizeInBytes) {
            final ServerCall<?> callTooBig = createCall(id, this.service, null, null, null, null, totalRequestSize, null, 0, this.callCleanup);
            this.rpcServer.metrics.exception(RpcServer.CALL_QUEUE_TOO_BIG_EXCEPTION);
            callTooBig.setResponse(null, null, RpcServer.CALL_QUEUE_TOO_BIG_EXCEPTION, "Call queue is full on " + this.rpcServer.server.getServerName() + ", is hbase.ipc.server.max.callqueue.size too small?");
            callTooBig.sendResponseIfReady();
            return;
        }
        MethodDescriptor md = null;
        Message param = null;
        CellScanner cellScanner = null;
        try {
            if (header.hasRequestParam() && header.getRequestParam()) {
                md = this.service.getDescriptorForType().findMethodByName(header.getMethodName());
                if (md == null) {
                    throw new UnsupportedOperationException(header.getMethodName());
                }
                builder = this.service.getRequestPrototype(md).newBuilderForType();
                cis.resetSizeCounter();
                int paramSize = cis.readRawVarint32();
                offset += cis.getTotalBytesRead();
                if (builder != null) {
                    ProtobufUtil.mergeFrom(builder, cis, paramSize);
                    param = builder.build();
                }
                offset += paramSize;
            } else {
                // currently the header must have a request param, so we directly throw an
                // exception here
                String msg = "Invalid request header: " + TextFormat.shortDebugString(header) + ", should have param set in it";
                RpcServer.LOG.warn(msg);
                throw new DoNotRetryIOException(msg);
            }
            if (header.hasCellBlockMeta()) {
                buf.position(offset);
                ByteBuff dup = buf.duplicate();
                dup.limit(offset + header.getCellBlockMeta().getLength());
                cellScanner = this.rpcServer.cellBlockBuilder.createCellScannerReusingBuffers(this.codec, this.compressionCodec, dup);
            }
        } catch (Throwable t) {
            InetSocketAddress address = this.rpcServer.getListenerAddress();
            String msg = (address != null ? address : "(channel closed)") + " is unable to read call parameter from client " + getHostAddress();
            RpcServer.LOG.warn(msg, t);
            this.rpcServer.metrics.exception(t);
            // Probably the hbase/hadoop version does not match the running hadoop version.
            if (t instanceof LinkageError) {
                t = new DoNotRetryIOException(t);
            }
            // If the method is not present on the server, do not retry.
            if (t instanceof UnsupportedOperationException) {
                t = new DoNotRetryIOException(t);
            }
            ServerCall<?> readParamsFailedCall = createCall(id, this.service, null, null, null, null, totalRequestSize, null, 0, this.callCleanup);
            readParamsFailedCall.setResponse(null, null, t, msg + "; " + t.getMessage());
            readParamsFailedCall.sendResponseIfReady();
            return;
        }
        int timeout = 0;
        if (header.hasTimeout() && header.getTimeout() > 0) {
            timeout = Math.max(this.rpcServer.minClientRequestTimeout, header.getTimeout());
        }
        ServerCall<?> call = createCall(id, this.service, md, header, param, cellScanner, totalRequestSize, this.addr, timeout, this.callCleanup);
        if (!this.rpcServer.scheduler.dispatch(new CallRunner(this.rpcServer, call))) {
            this.rpcServer.callQueueSizeInBytes.add(-1 * call.getSize());
            this.rpcServer.metrics.exception(RpcServer.CALL_QUEUE_TOO_BIG_EXCEPTION);
            call.setResponse(null, null, RpcServer.CALL_QUEUE_TOO_BIG_EXCEPTION, "Call queue is full on " + this.rpcServer.server.getServerName() + ", too many items queued ?");
            call.sendResponseIfReady();
        }
    }
}
Also used : Message(org.apache.hbase.thirdparty.com.google.protobuf.Message) DoNotRetryIOException(org.apache.hadoop.hbase.DoNotRetryIOException) CodedInputStream(org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream) InetSocketAddress(java.net.InetSocketAddress) ByteString(org.apache.hbase.thirdparty.com.google.protobuf.ByteString) Span(io.opentelemetry.api.trace.Span) CellScanner(org.apache.hadoop.hbase.CellScanner) RPCTInfo(org.apache.hadoop.hbase.shaded.protobuf.generated.TracingProtos.RPCTInfo) SingleByteBuff(org.apache.hadoop.hbase.nio.SingleByteBuff) ByteBuff(org.apache.hadoop.hbase.nio.ByteBuff) Context(io.opentelemetry.context.Context) TextMapGetter(io.opentelemetry.context.propagation.TextMapGetter) MethodDescriptor(org.apache.hbase.thirdparty.com.google.protobuf.Descriptors.MethodDescriptor) Scope(io.opentelemetry.context.Scope) RequestHeader(org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader)
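
processRequest() above extracts trace context from the RPCTInfo carried in the RequestHeader. The sketch below shows the matching inject step from a hypothetical sending side (TraceInfoInjectSketch and attachTraceInfo are illustrative names, not the HBase client implementation):

import java.util.HashMap;
import java.util.Map;

import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.propagation.TextMapSetter;
import org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader;
import org.apache.hadoop.hbase.shaded.protobuf.generated.TracingProtos.RPCTInfo;

public class TraceInfoInjectSketch {

    static RequestHeader attachTraceInfo(RequestHeader.Builder headerBuilder) {
        // Let the configured propagator write the current span context into a plain map ...
        Map<String, String> carrier = new HashMap<>();
        TextMapSetter<Map<String, String>> setter = (map, key, value) -> map.put(key, value);
        GlobalOpenTelemetry.getPropagators().getTextMapPropagator()
                .inject(Context.current(), carrier, setter);
        // ... and carry that map to the server inside RPCTInfo, where the TextMapGetter
        // in processRequest() reads it back out of getHeadersMap().
        return headerBuilder
                .setTraceInfo(RPCTInfo.newBuilder().putAllHeaders(carrier).build())
                .build();
    }
}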

Aggregations

RequestHeader (org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.RequestHeader)13 Test (org.junit.Test)8 Configuration (org.apache.hadoop.conf.Configuration)5 HBaseConfiguration (org.apache.hadoop.hbase.HBaseConfiguration)5 Put (org.apache.hadoop.hbase.client.Put)4 ArrayList (java.util.ArrayList)3 DoNotRetryIOException (org.apache.hadoop.hbase.DoNotRetryIOException)3 RegionInfo (org.apache.hadoop.hbase.client.RegionInfo)2 CellBlockMeta (org.apache.hadoop.hbase.shaded.protobuf.generated.RPCProtos.CellBlockMeta)2 ByteString (org.apache.hbase.thirdparty.com.google.protobuf.ByteString)2 CodedInputStream (org.apache.hbase.thirdparty.com.google.protobuf.CodedInputStream)2 Message (org.apache.hbase.thirdparty.com.google.protobuf.Message)2 ByteBuf (org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf)2 Span (io.opentelemetry.api.trace.Span)1 Context (io.opentelemetry.context.Context)1 Scope (io.opentelemetry.context.Scope)1 TextMapGetter (io.opentelemetry.context.propagation.TextMapGetter)1 BufferedInputStream (java.io.BufferedInputStream)1 DataInputStream (java.io.DataInputStream)1 DataOutputStream (java.io.DataOutputStream)1