Example 6 with Span

Use of com.nike.wingtips.Span in project riposte by Nike-Inc.

From the class AsyncHttpClientHelperTest, method constructor_clears_out_tracing_and_mdc_info_before_building_underlying_client_and_resets_afterward:

@DataProvider(value = { "true   |   true", "true   |   false", "false  |   true", "false  |   false" }, splitBy = "\\|")
@Test
public void constructor_clears_out_tracing_and_mdc_info_before_building_underlying_client_and_resets_afterward(boolean emptyBeforeCall, boolean explode) {
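    // given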
    AsyncHttpClientConfig config = new AsyncHttpClientConfig.Builder().build();
    AsyncHttpClientConfig.Builder builderMock = mock(AsyncHttpClientConfig.Builder.class);
    List<Span> traceAtTimeOfBuildCall = new ArrayList<>();
    List<Map<String, String>> mdcAtTimeOfBuildCall = new ArrayList<>();
    RuntimeException explodeEx = new RuntimeException("kaboom");
    doAnswer(invocation -> {
        traceAtTimeOfBuildCall.add(Tracer.getInstance().getCurrentSpan());
        mdcAtTimeOfBuildCall.add(MDC.getCopyOfContextMap());
        if (explode)
            throw explodeEx;
        return config;
    }).when(builderMock).build();
    Span spanBeforeCall = (emptyBeforeCall) ? null : Tracer.getInstance().startRequestWithRootSpan("foo");
    Map<String, String> mdcBeforeCall = MDC.getCopyOfContextMap();
    assertThat(Tracer.getInstance().getCurrentSpan()).isEqualTo(spanBeforeCall);
    if (emptyBeforeCall)
        assertThat(mdcBeforeCall).isNull();
    else
        assertThat(mdcBeforeCall).isNotEmpty();
    // when
    Throwable ex = catchThrowable(() -> new AsyncHttpClientHelper(builderMock, true));
    // then
    verify(builderMock).build();
    assertThat(traceAtTimeOfBuildCall).hasSize(1);
    assertThat(traceAtTimeOfBuildCall.get(0)).isNull();
    assertThat(mdcAtTimeOfBuildCall).hasSize(1);
    assertThat(mdcAtTimeOfBuildCall.get(0)).isNull();
    assertThat(Tracer.getInstance().getCurrentSpan()).isEqualTo(spanBeforeCall);
    assertThat(MDC.getCopyOfContextMap()).isEqualTo(mdcBeforeCall);
    if (explode)
        assertThat(ex).isSameAs(explodeEx);
}
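For context, the behavior this test pins down is a clear-then-restore pattern around the underlying client build. A minimal sketch of that pattern, using the standard Wingtips Tracer and SLF4J MDC APIs (hypothetical code, not the actual AsyncHttpClientHelper constructor):

// Hypothetical sketch of the pattern verified by the test above.
Deque<Span> originalSpanStack = Tracer.getInstance().unregisterFromThread();
Map<String, String> originalMdcInfo = MDC.getCopyOfContextMap();
MDC.clear();
try {
    // Build the underlying client with no tracing or MDC info attached to the thread.
    this.asyncHttpClient = builder.build();
} finally {
    // Restore the original tracing and MDC state, even if build() threw.
    Tracer.getInstance().registerWithThread(originalSpanStack);
    if (originalMdcInfo == null)
        MDC.clear();
    else
        MDC.setContextMap(originalMdcInfo);
}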

Example 7 with Span

Use of com.nike.wingtips.Span in project riposte by Nike-Inc.

From the class ChannelPipelineFinalizerHandler, method doChannelInactive:

/**
 * This method is used as the final cleanup safety net for when a channel is closed. It guarantees that any
 * {@link ByteBuf}s being held by {@link RequestInfo} or {@link ProxyRouterProcessingState} are {@link
 * ByteBuf#release()}d so that we don't end up with a memory leak.
 *
 * <p>Note that we can't use {@link ChannelOutboundHandler#close(ChannelHandlerContext, ChannelPromise)} for this
 * purpose as it is only called if we close the connection in our application code. It won't be triggered if
 * (for example) the caller closes the connection, and we need it to *always* run for *every* closed connection,
 * no matter the source of the close. {@link ChannelInboundHandler#channelInactive(ChannelHandlerContext)} is always
 * called, so we're using that.
 */
@Override
public PipelineContinuationBehavior doChannelInactive(ChannelHandlerContext ctx) throws Exception {
    try {
        // Grab hold of the things we may need when cleaning up.
        HttpProcessingState httpState = ChannelAttributes.getHttpProcessingStateForChannel(ctx).get();
        ProxyRouterProcessingState proxyRouterState = ChannelAttributes.getProxyRouterProcessingStateForChannel(ctx).get();
        if (httpState == null) {
            if (proxyRouterState == null) {
                logger.debug("This channel closed before it processed any requests. Nothing to cleanup. " + "current_span={}", Tracer.getInstance().getCurrentSpan());
            } else {
                // This should never happen, but if it does we'll try to release what we can and then return.
                logger.error("Found a channel where HttpProcessingState was null, but ProxyRouterProcessingState " + "was not null. This should not be possible! " + "current_span={}", Tracer.getInstance().getCurrentSpan());
                releaseProxyRouterStateResources(proxyRouterState, ctx);
            }
            // With httpState null, there's nothing left for us to do.
            return PipelineContinuationBehavior.CONTINUE;
        }
        RequestInfo<?> requestInfo = httpState.getRequestInfo();
        ResponseInfo<?> responseInfo = httpState.getResponseInfo();
        if (logger.isDebugEnabled()) {
            runnableWithTracingAndMdc(() -> logger.debug("Cleaning up channel after it was closed. closed_channel_id={}", ctx.channel().toString()), ctx).run();
        }
        // The request/response is definitely done at this point since the channel is closing. Set the response end
        // time if it hasn't already been done.
        httpState.setResponseEndTimeNanosToNowIfNotAlreadySet();
        // Handle the case where the response wasn't fully sent or tracing wasn't completed for some reason.
        // We want to finish the distributed tracing span for this request since there's no other place it
        // might be done, and if the request wasn't fully sent then we should spit out a log message so
        // debug investigations can find out what happened.
        @SuppressWarnings("SimplifiableConditionalExpression") boolean tracingAlreadyCompleted = httpState.isTraceCompletedOrScheduled();
        boolean responseNotFullySent = responseInfo == null || !responseInfo.isResponseSendingLastChunkSent();
        if (responseNotFullySent || !tracingAlreadyCompleted) {
            runnableWithTracingAndMdc(() -> {
                if (responseNotFullySent) {
                    logger.warn("The caller's channel was closed before a response could be sent. Distributed tracing " + "will be completed now if it wasn't already done, and we will attempt to output an " + "access log if needed. Any dangling resources will be released. " + "response_info_is_null={}", (responseInfo == null));
                }
                if (!tracingAlreadyCompleted) {
                    httpState.setTraceCompletedOrScheduled(true);
                    Span currentSpan = Tracer.getInstance().getCurrentSpan();
                    if (currentSpan != null && !currentSpan.isCompleted())
                        Tracer.getInstance().completeRequestSpan();
                }
            }, ctx).run();
        }
        // Make sure access logging is handled
        if (!httpState.isAccessLogCompletedOrScheduled() && accessLogger != null) {
            httpState.setAccessLogCompletedOrScheduled(true);
            RequestInfo<?> requestInfoToUse = (requestInfo == null) ? RequestInfoImpl.dummyInstanceForUnknownRequests() : requestInfo;
            accessLogger.log(requestInfoToUse, httpState.getActualResponseObject(), responseInfo, httpState.calculateTotalRequestTimeMillis());
        }
        // Make sure metrics are handled
        handleMetricsForCompletedRequestIfNotAlreadyDone(httpState);
        // Tell the RequestInfo it can release all its resources.
        if (requestInfo != null)
            requestInfo.releaseAllResources();
        releaseProxyRouterStateResources(proxyRouterState, ctx);
    } catch (Throwable t) {
        runnableWithTracingAndMdc(() -> logger.error("An unexpected error occurred during ChannelPipelineFinalizerHandler.doChannelInactive() - this " + "should not happen and indicates a bug that needs to be fixed in Riposte.", t), ctx).run();
    }
    return PipelineContinuationBehavior.CONTINUE;
}
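The always-run guarantee described in the javadoc is the key point. Here is a minimal standalone sketch of the same pattern (hypothetical handler, not Riposte source), releasing a reference-counted resource from channelInactive() so cleanup happens no matter which side closed the connection:

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.ReferenceCountUtil;

public class ResourceCleanupHandler extends ChannelInboundHandlerAdapter {

    private final ByteBuf heldBuffer; // some reference-counted resource

    public ResourceCleanupHandler(ByteBuf heldBuffer) {
        this.heldBuffer = heldBuffer;
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        // Release here rather than in close(...) so the release also happens
        // when the remote side closes the connection.
        ReferenceCountUtil.release(heldBuffer);
        super.channelInactive(ctx);
    }
}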

Example 8 with Span

Use of com.nike.wingtips.Span in project riposte by Nike-Inc.

From the class AsyncHttpClientHelper, method executeAsyncHttpRequest:

/**
 * Executes the given request asynchronously, handling the response with the given responseHandlerFunction, and
 * returns a {@link CompletableFuture} that represents the result of executing the
 * responseHandlerFunction on the downstream response. Any error anywhere along the way will cause the returned
 * future to be completed with {@link CompletableFuture#completeExceptionally(Throwable)}.
 * <p/>
 * NOTE: This is a helper method for calling {@link #executeAsyncHttpRequest(RequestBuilderWrapper,
 * AsyncResponseHandler, java.util.Deque, java.util.Map)} that uses {@link
 * ChannelAttributes#getHttpProcessingStateForChannel(ChannelHandlerContext)} to extract the {@link
 * HttpProcessingState} from the given ctx argument, and then grabs {@link
 * HttpProcessingState#getDistributedTraceStack()} and {@link HttpProcessingState#getLoggerMdcContextMap()} to use
 * as the distributed trace stack and MDC info for the downstream call.
 */
public <O> CompletableFuture<O> executeAsyncHttpRequest(RequestBuilderWrapper requestBuilderWrapper, AsyncResponseHandler<O> responseHandlerFunction, ChannelHandlerContext ctx) {
    HttpProcessingState state = ChannelAttributes.getHttpProcessingStateForChannel(ctx).get();
    if (state == null)
        throw new IllegalStateException("state cannot be null");
    Map<String, String> mdcContextMap = state.getLoggerMdcContextMap();
    Deque<Span> distributedTraceStack = state.getDistributedTraceStack();
    requestBuilderWrapper.setCtx(ctx);
    return executeAsyncHttpRequest(requestBuilderWrapper, responseHandlerFunction, distributedTraceStack, mdcContextMap);
}
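A usage sketch for this ctx-based overload (the URL and response handling are hypothetical, and the exact getRequestBuilder signature is an assumption based on the surrounding API):

// Hypothetical caller: the helper pulls the trace stack and MDC info out of
// the channel's HttpProcessingState, so only the ctx needs to be passed along.
RequestBuilderWrapper reqWrapper = asyncHttpClientHelper.getRequestBuilder(
    "http://some-downstream-host/some/path", HttpMethod.GET);

CompletableFuture<String> resultFuture = asyncHttpClientHelper.executeAsyncHttpRequest(
    reqWrapper,
    response -> response.getResponseBody(), // AsyncResponseHandler<String>
    ctx // the Netty ChannelHandlerContext for the request being processed
);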

Example 9 with Span

Use of com.nike.wingtips.Span in project riposte by Nike-Inc.

From the class StreamingAsyncHttpClient, method streamDownstreamCall:

/**
 * TODO: Fully document me.
 * <br/>
 * NOTE: The returned CompletableFuture will only be completed successfully if the connection to the downstream
 * server was successful and the initialRequestChunk was successfully written out. This has implications for
 * initialRequestChunk regarding releasing its reference count (i.e. calling {@link
 * io.netty.util.ReferenceCountUtil#release(Object)} and passing in initialRequestChunk). If the returned
 * CompletableFuture is successful it means initialRequestChunk's reference count will already be reduced by one
 * relative to when this method was called because it will have been passed to a successful {@link
 * ChannelHandlerContext#writeAndFlush(Object)} method call.
 * <p/>
 * Long story short - assume initialRequestChunk is an object with a reference count of x:
 * <ul>
 *     <li>
 *         If the returned CompletableFuture is successful, then when it completes successfully
 *         initialRequestChunk's reference count will be x - 1
 *     </li>
 *     <li>
 *         If the returned CompletableFuture is *NOT* successful, then when it completes initialRequestChunk's
 *         reference count will still be x
 *     </li>
 * </ul>
 */
public CompletableFuture<StreamingChannel> streamDownstreamCall(String downstreamHost, int downstreamPort, HttpRequest initialRequestChunk, boolean isSecureHttpsCall, boolean relaxedHttpsValidation, StreamingCallback callback, long downstreamCallTimeoutMillis, boolean performSubSpanAroundDownstreamCalls, boolean addTracingHeadersToDownstreamCall, ChannelHandlerContext ctx) {
    CompletableFuture<StreamingChannel> streamingChannel = new CompletableFuture<>();
    // Set the Host header, including the port in the value when it is a non-default port.
    boolean isDefaultPort = (downstreamPort == 80 && !isSecureHttpsCall) || (downstreamPort == 443 && isSecureHttpsCall);
    String hostHeaderValue = (isDefaultPort) ? downstreamHost : downstreamHost + ":" + downstreamPort;
    initialRequestChunk.headers().set(HttpHeaders.Names.HOST, hostHeaderValue);
    long beforeConnectionStartTimeNanos = System.nanoTime();
    // Create a connection to the downstream server.
    ChannelPool pool = getPooledChannelFuture(downstreamHost, downstreamPort);
    Future<Channel> channelFuture = pool.acquire();
    // Add a listener that kicks off the downstream call once the connection is completed.
    channelFuture.addListener(future -> {
        Pair<Deque<Span>, Map<String, String>> originalThreadInfo = null;
        try {
            long connectionSetupTimeNanos = System.nanoTime() - beforeConnectionStartTimeNanos;
            HttpProcessingState httpProcessingState = ChannelAttributes.getHttpProcessingStateForChannel(ctx).get();
            if (httpProcessingState != null) {
                RequestInfo<?> requestInfo = httpProcessingState.getRequestInfo();
                if (requestInfo != null) {
                    requestInfo.addRequestAttribute(DOWNSTREAM_CALL_CONNECTION_SETUP_TIME_NANOS_REQUEST_ATTR_KEY, connectionSetupTimeNanos);
                }
            }
            // Setup tracing and MDC so our log messages have the correct distributed trace info, etc.
            originalThreadInfo = linkTracingAndMdcToCurrentThread(ctx);
            if (logger.isDebugEnabled()) {
                logger.debug("CONNECTION SETUP TIME NANOS: {}", connectionSetupTimeNanos);
            }
            if (!future.isSuccess()) {
                try {
                    // We did not connect to the downstream host successfully. Notify the callback.
                    streamingChannel.completeExceptionally(new WrapperException("Unable to connect to downstream host: " + downstreamHost, future.cause()));
                } finally {
                    Channel ch = channelFuture.getNow();
                    if (ch != null) {
                        // We likely will never reach here since the channel future was not successful, however if
                        // we *do* manage to get here somehow, then mark the channel broken and release it back
                        // to the pool.
                        markChannelAsBroken(ch);
                        pool.release(ch);
                    }
                }
                return;
            }
            // noinspection ConstantConditions
            if (performSubSpanAroundDownstreamCalls) {
                // Add the subspan.
                String spanName = getSubspanSpanName(initialRequestChunk.getMethod().name(), downstreamHost + ":" + downstreamPort + initialRequestChunk.getUri());
                if (Tracer.getInstance().getCurrentSpan() == null) {
                    // There is no parent span to start a subspan from, so we have to start a new span for this call
                    // rather than a subspan.
                    // TODO: Set this to CLIENT once we have that ability in the wingtips API for request root spans
                    Tracer.getInstance().startRequestWithRootSpan(spanName);
                } else {
                    // There was at least one span on the stack, so we can start a subspan for this call.
                    Tracer.getInstance().startSubSpan(spanName, Span.SpanPurpose.CLIENT);
                }
            }
            Deque<Span> distributedSpanStackToUse = Tracer.getInstance().getCurrentSpanStackCopy();
            Map<String, String> mdcContextToUse = MDC.getCopyOfContextMap();
            Span spanForDownstreamCall = (distributedSpanStackToUse == null) ? null : distributedSpanStackToUse.peek();
            // Add distributed trace headers to the downstream call if desired and we have a current span.
            if (addTracingHeadersToDownstreamCall && spanForDownstreamCall != null) {
                HttpRequestTracingUtils.propagateTracingHeaders((headerKey, headerValue) -> {
                    if (headerValue != null) {
                        initialRequestChunk.headers().set(headerKey, headerValue);
                    }
                }, spanForDownstreamCall);
            }
            Channel ch = channelFuture.getNow();
            if (logger.isDebugEnabled())
                logger.debug("Channel ID of the Channel pulled from the pool: {}", ch.toString());
            // We may not be in the right thread to modify the channel pipeline and write data. If we're in the
            // wrong thread we can get deadlock type situations. By running the relevant bits in the channel's
            // event loop we're guaranteed it will be run in the correct thread.
            ch.eventLoop().execute(runnableWithTracingAndMdc(() -> {
                BiConsumer<String, Throwable> prepChannelErrorHandler = (errorMessage, cause) -> {
                    try {
                        streamingChannel.completeExceptionally(new WrapperException(errorMessage, cause));
                    } finally {
                        // This channel may be permanently busted depending on the error, so mark it broken and let
                        // the pool close it and clean it up.
                        markChannelAsBroken(ch);
                        pool.release(ch);
                    }
                };
                try {
                    ObjectHolder<Boolean> callActiveHolder = new ObjectHolder<>();
                    callActiveHolder.heldObject = true;
                    ObjectHolder<Boolean> lastChunkSentDownstreamHolder = new ObjectHolder<>();
                    lastChunkSentDownstreamHolder.heldObject = false;
                    // noinspection ConstantConditions
                    prepChannelForDownstreamCall(pool, ch, callback, distributedSpanStackToUse, mdcContextToUse, isSecureHttpsCall, relaxedHttpsValidation, performSubSpanAroundDownstreamCalls, downstreamCallTimeoutMillis, callActiveHolder, lastChunkSentDownstreamHolder);
                    logInitialRequestChunk(initialRequestChunk, downstreamHost, downstreamPort);
                    // Send the HTTP request.
                    ChannelFuture writeFuture = ch.writeAndFlush(initialRequestChunk);
                    // After the initial chunk has been sent we'll open the floodgates
                    // for any further chunk streaming
                    writeFuture.addListener(completedWriteFuture -> {
                        if (completedWriteFuture.isSuccess())
                            streamingChannel.complete(new StreamingChannel(ch, pool, callActiveHolder, lastChunkSentDownstreamHolder, distributedSpanStackToUse, mdcContextToUse));
                        else {
                            prepChannelErrorHandler.accept("Writing the first HttpRequest chunk to the downstream service failed.", completedWriteFuture.cause());
                            // noinspection UnnecessaryReturnStatement
                            return;
                        }
                    });
                } catch (SSLException | NoSuchAlgorithmException | KeyStoreException ex) {
                    prepChannelErrorHandler.accept("Error setting up SSL context for downstream call", ex);
                    // noinspection UnnecessaryReturnStatement
                    return;
                } catch (Throwable t) {
                    // If we don't catch and handle this here it gets swallowed since we're in a Runnable
                    prepChannelErrorHandler.accept("An unexpected error occurred while prepping the channel pipeline for the downstream call", t);
                    // noinspection UnnecessaryReturnStatement
                    return;
                }
            }, ctx));
        } catch (Throwable ex) {
            try {
                String errorMsg = "Error occurred attempting to send first chunk (headers/etc) downstream";
                Exception errorToFire = new WrapperException(errorMsg, ex);
                logger.warn(errorMsg, errorToFire);
                streamingChannel.completeExceptionally(errorToFire);
            } finally {
                Channel ch = channelFuture.getNow();
                if (ch != null) {
                    // Depending on where the error was thrown the channel may or may not exist. If it does exist,
                    // then assume it's unusable, mark it as broken, and let the pool close it and remove it.
                    markChannelAsBroken(ch);
                    pool.release(ch);
                }
            }
        } finally {
            // Unhook the tracing and MDC stuff from this thread now that we're done.
            unlinkTracingAndMdcFromCurrentThread(originalThreadInfo);
        }
    });
    return streamingChannel;
}
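Given the reference-count contract spelled out in the javadoc, a caller can make cleanup deterministic by keying off the future's outcome. A sketch under that stated contract (names hypothetical):

streamingAsyncHttpClient.streamDownstreamCall(
    host, port, initialRequestChunk, isHttps, relaxedHttpsValidation,
    callback, timeoutMillis, true, true, ctx
).whenComplete((streamingChannel, error) -> {
    if (error != null) {
        // The future failed, so the write never consumed a reference:
        // initialRequestChunk's refcount is still x and releasing it is our job.
        ReferenceCountUtil.release(initialRequestChunk);
    }
    // On success the refcount is already x - 1, because the successful
    // writeAndFlush(...) consumed one reference; no release is needed here.
});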

Example 10 with Span

Use of com.nike.wingtips.Span in project riposte by Nike-Inc.

From the class StreamingAsyncHttpClient, method prepChannelForDownstreamCall:

protected void prepChannelForDownstreamCall(ChannelPool pool, Channel ch, StreamingCallback callback, Deque<Span> distributedSpanStackToUse, Map<String, String> mdcContextToUse, boolean isSecureHttpsCall, boolean relaxedHttpsValidation, boolean performSubSpanAroundDownstreamCalls, long downstreamCallTimeoutMillis, ObjectHolder<Boolean> callActiveHolder, ObjectHolder<Boolean> lastChunkSentDownstreamHolder) throws SSLException, NoSuchAlgorithmException, KeyStoreException {
    ChannelHandler chunkSenderHandler = new SimpleChannelInboundHandler<HttpObject>() {

        @Override
        protected void channelRead0(ChannelHandlerContext downstreamCallCtx, HttpObject msg) throws Exception {
            try {
                // Only react if the call is still active; late events that arrive after
                // the call is fully processed should not trigger the behavior a second time.
                if (callActiveHolder.heldObject) {
                    if (msg instanceof LastHttpContent) {
                        lastChunkSentDownstreamHolder.heldObject = true;
                        if (performSubSpanAroundDownstreamCalls) {
                            // Complete the subspan.
                            runnableWithTracingAndMdc(() -> {
                                if (distributedSpanStackToUse == null || distributedSpanStackToUse.size() < 2)
                                    Tracer.getInstance().completeRequestSpan();
                                else
                                    Tracer.getInstance().completeSubSpan();
                            }, distributedSpanStackToUse, mdcContextToUse).run();
                        }
                    }
                    HttpObject msgToPass = msg;
                    if (msg instanceof HttpResponse) {
                        // We can't pass the original HttpResponse back to the callback due to intricacies of how
                        // Netty handles determining the last chunk. If we do, and the callback ends up writing
                        // the message out to the client (which happens during proxy routing for example), then
                        // msg's headers might get modified - potentially causing this channel pipeline to
                        // never send a LastHttpContent, which will in turn cause an indefinite hang.
                        HttpResponse origHttpResponse = (HttpResponse) msg;
                        HttpResponse httpResponse = (msg instanceof FullHttpResponse) ? new DefaultFullHttpResponse(origHttpResponse.getProtocolVersion(), origHttpResponse.getStatus(), ((FullHttpResponse) msg).content()) : new DefaultHttpResponse(origHttpResponse.getProtocolVersion(), origHttpResponse.getStatus());
                        httpResponse.headers().add(origHttpResponse.headers());
                        msgToPass = httpResponse;
                    }
                    callback.messageReceived(msgToPass);
                } else {
                    if (shouldLogBadMessagesAfterRequestFinishes) {
                        runnableWithTracingAndMdc(() -> logger.warn("Received HttpObject msg when call was not active: {}", String.valueOf(msg)), distributedSpanStackToUse, mdcContextToUse).run();
                    }
                }
            } finally {
                if (msg instanceof LastHttpContent) {
                    releaseChannelBackToPoolIfCallIsActive(ch, pool, callActiveHolder, "last content chunk sent", distributedSpanStackToUse, mdcContextToUse);
                }
            }
        }
    };
    Consumer<Throwable> doErrorHandlingConsumer = (cause) -> {
        Pair<Deque<Span>, Map<String, String>> originalThreadInfo = null;
        try {
            // Setup tracing and MDC so our log messages have the correct distributed trace info, etc.
            originalThreadInfo = linkTracingAndMdcToCurrentThread(distributedSpanStackToUse, mdcContextToUse);
            // Only handle this error if the call is still active; errors that arrive after
            // the call is fully processed should not trigger the behavior a second time.
            if (callActiveHolder.heldObject) {
                if (performSubSpanAroundDownstreamCalls) {
                    if (distributedSpanStackToUse == null || distributedSpanStackToUse.size() < 2)
                        Tracer.getInstance().completeRequestSpan();
                    else
                        Tracer.getInstance().completeSubSpan();
                }
                Tracer.getInstance().unregisterFromThread();
                if (cause instanceof Errors.NativeIoException) {
                    // NativeIoExceptions are often setup to not have stack traces which is bad for debugging.
                    // Wrap it in a NativeIoExceptionWrapper that maps to a 503 since this is likely a busted
                    // connection and a second attempt should work.
                    cause = new NativeIoExceptionWrapper("Caught a NativeIoException in the downstream streaming call pipeline. Wrapped it in a " + "NativeIoExceptionWrapper so that it maps to a 503 and provides a usable stack trace " + "in the logs.", (Errors.NativeIoException) cause);
                }
                callback.unrecoverableErrorOccurred(cause, true);
            } else {
                if (cause instanceof DownstreamIdleChannelTimeoutException) {
                    logger.debug("A channel used for downstream calls will be closed because it was idle too long. " + "This is normal behavior and does not indicate a downstream call failure: {}", cause.toString());
                } else {
                    logger.warn("Received exception in downstream call pipeline after the call was finished. " + "Not necessarily anything to worry about but in case it helps debugging the " + "exception was: {}", cause.toString());
                }
            }
        } finally {
            // Mark the channel as broken so it will be closed and removed from the pool when it is returned.
            markChannelAsBroken(ch);
            // Release it back to the pool if possible/necessary so the pool can do its usual cleanup.
            releaseChannelBackToPoolIfCallIsActive(ch, pool, callActiveHolder, "error received in downstream pipeline: " + cause.toString(), distributedSpanStackToUse, mdcContextToUse);
            // No matter what the cause is we want to make sure the channel is closed. Doing this raw ch.close()
            // here will catch the cases where this channel does not have an active call but still needs to be
            // closed (e.g. an idle channel timeout that happens in-between calls).
            ch.close();
            // Unhook the tracing and MDC stuff from this thread now that we're done.
            unlinkTracingAndMdcFromCurrentThread(originalThreadInfo);
        }
    };
    ChannelHandler errorHandler = new ChannelInboundHandlerAdapter() {

        @Override
        public void exceptionCaught(ChannelHandlerContext downstreamCallCtx, Throwable cause) throws Exception {
            doErrorHandlingConsumer.accept(cause);
        }

        @Override
        public void channelInactive(ChannelHandlerContext ctx) throws Exception {
            if (logger.isDebugEnabled()) {
                runnableWithTracingAndMdc(() -> logger.debug("Downstream channel closing. call_active={}, last_chunk_sent_downstream={}, channel_id={}", callActiveHolder.heldObject, lastChunkSentDownstreamHolder.heldObject, ctx.channel().toString()), distributedSpanStackToUse, mdcContextToUse).run();
            }
            // We only care if the channel was closed while the call was active.
            if (callActiveHolder.heldObject)
                doErrorHandlingConsumer.accept(new DownstreamChannelClosedUnexpectedlyException(ch));
            super.channelInactive(ctx);
        }
    };
    // Set up the HTTP client pipeline.
    ChannelPipeline p = ch.pipeline();
    List<String> registeredHandlerNames = p.names();
    // Clean up any dangling idle-channel-timeout detector that was disabled earlier but
    // couldn't be removed at that time because the code wasn't running in the channel's eventLoop.
    if (registeredHandlerNames.contains(DOWNSTREAM_IDLE_CHANNEL_TIMEOUT_HANDLER_NAME)) {
        ChannelHandler idleHandler = p.get(DOWNSTREAM_IDLE_CHANNEL_TIMEOUT_HANDLER_NAME);
        if (idleHandler != null)
            p.remove(idleHandler);
    }
    if (debugChannelLifecycleLoggingEnabled && !registeredHandlerNames.contains(DEBUG_LOGGER_HANDLER_NAME)) {
        // Add the channel debug logger if desired.
        p.addFirst(DEBUG_LOGGER_HANDLER_NAME, new LoggingHandler(DOWNSTREAM_CLIENT_CHANNEL_DEBUG_LOGGER_NAME, LogLevel.DEBUG));
    }
    // Add/replace a downstream call timeout detector.
    addOrReplacePipelineHandler(new DownstreamIdleChannelTimeoutHandler(downstreamCallTimeoutMillis, () -> callActiveHolder.heldObject, true, "StreamingAsyncHttpClientChannel-call-timeout", distributedSpanStackToUse, mdcContextToUse), DOWNSTREAM_CALL_TIMEOUT_HANDLER_NAME, p, registeredHandlerNames);
    if (isSecureHttpsCall) {
        // SSL call. Make sure we add the SSL handler if necessary.
        if (!registeredHandlerNames.contains(SSL_HANDLER_NAME)) {
            if (clientSslCtx == null) {
                if (relaxedHttpsValidation) {
                    clientSslCtx = SslContextBuilder.forClient().trustManager(InsecureTrustManagerFactory.INSTANCE).build();
                } else {
                    TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
                    tmf.init((KeyStore) null);
                    clientSslCtx = SslContextBuilder.forClient().trustManager(tmf).build();
                }
            }
            p.addAfter(DOWNSTREAM_CALL_TIMEOUT_HANDLER_NAME, SSL_HANDLER_NAME, clientSslCtx.newHandler(ch.alloc()));
        }
    } else {
        // Not an SSL call. Remove the SSL handler if it's there.
        if (registeredHandlerNames.contains(SSL_HANDLER_NAME))
            p.remove(SSL_HANDLER_NAME);
    }
    // The HttpClientCodec handler deals with HTTP codec stuff so you don't have to. Set it up if it hasn't already
    // been setup, and inspect it to make sure it's in a "ready to handle a new request" state. Some rare
    // and currently unknown edgecases can cause us to hit this point with the HttpClientCodec in an unclean
    // state, and if we barrel forward without cleaning this up the call will fail.
    boolean pipelineContainsHttpClientCodec = registeredHandlerNames.contains(HTTP_CLIENT_CODEC_HANDLER_NAME);
    boolean existingHttpClientCodecIsInBadState = false;
    if (pipelineContainsHttpClientCodec) {
        HttpClientCodec currentCodec = (HttpClientCodec) p.get(HTTP_CLIENT_CODEC_HANDLER_NAME);
        int currentHttpClientCodecInboundState = determineHttpClientCodecInboundState(currentCodec);
        if (currentHttpClientCodecInboundState != 0) {
            runnableWithTracingAndMdc(() -> logger.warn("HttpClientCodec inbound state was not 0. It will be replaced with a fresh HttpClientCodec. " + "bad_httpclientcodec_inbound_state={}", currentHttpClientCodecInboundState), distributedSpanStackToUse, mdcContextToUse).run();
            existingHttpClientCodecIsInBadState = true;
        } else {
            int currentHttpClientCodecOutboundState = determineHttpClientCodecOutboundState(currentCodec);
            if (currentHttpClientCodecOutboundState != 0) {
                runnableWithTracingAndMdc(() -> logger.warn("HttpClientCodec outbound state was not 0. It will be replaced with a fresh HttpClientCodec. " + "bad_httpclientcodec_outbound_state={}", currentHttpClientCodecOutboundState), distributedSpanStackToUse, mdcContextToUse).run();
                existingHttpClientCodecIsInBadState = true;
            }
        }
    }
    // Add the HttpClientCodec if it wasn't already in the pipeline, or replace it if it was in a bad state.
    if (!pipelineContainsHttpClientCodec || existingHttpClientCodecIsInBadState) {
        addOrReplacePipelineHandler(new HttpClientCodec(4096, 8192, 8192, true), HTTP_CLIENT_CODEC_HANDLER_NAME, p, registeredHandlerNames);
    }
    // Update the chunk sender handler and error handler to the newly created versions that know about the correct
    // callback, dtrace info, etc to use for this request.
    addOrReplacePipelineHandler(chunkSenderHandler, CHUNK_SENDER_HANDLER_NAME, p, registeredHandlerNames);
    addOrReplacePipelineHandler(errorHandler, ERROR_HANDLER_NAME, p, registeredHandlerNames);
}
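This method relies on an addOrReplacePipelineHandler(...) helper that isn't shown in this example. A plausible implementation (hypothetical; the real Riposte helper may differ, e.g. in how it preserves handler ordering):

protected void addOrReplacePipelineHandler(ChannelHandler newHandler, String handlerName,
                                           ChannelPipeline p, List<String> registeredHandlerNames) {
    if (registeredHandlerNames.contains(handlerName)) {
        // A handler is already registered under this name, so replace it in place
        // and keep the pipeline ordering intact.
        p.replace(handlerName, handlerName, newHandler);
    } else {
        p.addLast(handlerName, newHandler);
    }
}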
