Usage of io.netty.channel.pool.SimpleChannelPool in the Riposte project by Nike-Inc: the getPoolMap method of the StreamingAsyncHttpClient class.
protected ChannelPoolMap<InetSocketAddress, SimpleChannelPool> getPoolMap() {
    ChannelPoolMap<InetSocketAddress, SimpleChannelPool> result = poolMap;
    if (poolMap == null) {
        /*
            This method gets called for every downstream call, so we don't want to synchronize the whole method. But
            it's easy for multiple threads to get here at the same time when the server starts up, so we need *some*
            kind of protection around the creation of poolMap, hence the elaborate (but correct) double-checked
            locking. Since poolMap is volatile this works, and the local variable "result" helps with speed during
            the normal case where poolMap has already been initialized.

            See https://en.wikipedia.org/wiki/Double-checked_locking
         */
        synchronized (this) {
            result = poolMap;
            if (result == null) {
                EventLoopGroup eventLoopGroup;
                Class<? extends SocketChannel> channelClass;
                if (Epoll.isAvailable()) {
                    logger.info("Creating channel pool. The epoll native transport is available. Using epoll instead of "
                                + "NIO. proxy_router_using_native_epoll_transport=true");
                    eventLoopGroup = new EpollEventLoopGroup(0, createProxyRouterThreadFactory());
                    channelClass = EpollSocketChannel.class;
                }
                else {
                    logger.info("Creating channel pool. The epoll native transport is NOT available or you are not running "
                                + "on a compatible OS/architecture. Using NIO. "
                                + "proxy_router_using_native_epoll_transport=false");
                    eventLoopGroup = new NioEventLoopGroup(0, createProxyRouterThreadFactory());
                    channelClass = NioSocketChannel.class;
                }

                result = new AbstractChannelPoolMap<InetSocketAddress, SimpleChannelPool>() {
                    @Override
                    protected SimpleChannelPool newPool(InetSocketAddress key) {
                        return new SimpleChannelPool(
                            generateClientBootstrap(eventLoopGroup, channelClass).remoteAddress(key),
                            new ChannelPoolHandlerImpl(),
                            CHANNEL_HEALTH_CHECK_INSTANCE
                        ) {
                            @Override
                            public Future<Void> release(Channel channel, Promise<Void> promise) {
                                markChannelBrokenAndLogInfoIfHttpClientCodecStateIsNotZero(
                                    channel, "Releasing channel back to pool"
                                );
                                return super.release(channel, promise);
                            }

                            @Override
                            protected Channel pollChannel() {
                                Channel channel = super.pollChannel();

                                if (channel != null) {
                                    markChannelBrokenAndLogInfoIfHttpClientCodecStateIsNotZero(
                                        channel, "Polling channel to be reused before healthcheck"
                                    );

                                    if (idleChannelTimeoutMillis > 0) {
                                        /*
                                            We have a channel that is about to be re-used, so disable the idle channel
                                            timeout detector if it exists. By disabling it here we make sure that it is
                                            effectively "gone" before the healthcheck happens, preventing race
                                            conditions. Note that we can't call pipeline.remove() here because we may
                                            not be in the pipeline's event loop, so calling pipeline.remove() could
                                            lead to thread deadlock, but we can't call channel.eventLoop().execute()
                                            because we need it disabled *now* before the healthcheck happens. The
                                            pipeline preparation phase will remove it safely soon, and in the meantime
                                            it will be disabled.
                                         */
                                        ChannelPipeline pipeline = channel.pipeline();
                                        ChannelHandler idleHandler =
                                            pipeline.get(DOWNSTREAM_IDLE_CHANNEL_TIMEOUT_HANDLER_NAME);
                                        if (idleHandler != null) {
                                            ((DownstreamIdleChannelTimeoutHandler) idleHandler).disableTimeoutHandling();
                                        }
                                    }
                                }

                                return channel;
                            }

                            @Override
                            protected boolean offerChannel(Channel channel) {
                                if (idleChannelTimeoutMillis > 0) {
                                    // Add an idle channel timeout detector. This will be removed before the
                                    // channel's reacquisition healthcheck runs (in pollChannel()), so we won't
                                    // have a race condition where this channel is handed over for use but gets
                                    // squashed right before it's about to be used.
                                    // NOTE: Due to the semantics of pool.release() we're guaranteed to be in the
                                    // channel's event loop, so there's no chance of a thread deadlock when
                                    // messing with the pipeline.
                                    channel.pipeline().addFirst(
                                        DOWNSTREAM_IDLE_CHANNEL_TIMEOUT_HANDLER_NAME,
                                        new DownstreamIdleChannelTimeoutHandler(
                                            idleChannelTimeoutMillis, () -> true, false,
                                            "StreamingAsyncHttpClientChannel-idle", null, null
                                        )
                                    );
                                }
                                return super.offerChannel(channel);
                            }
                        };
                    }
                };

                poolMap = result;
            }
        }
    }

    return result;
}
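Netty's AbstractChannelPoolMap, used above, lazily creates one pool per key via newPool(key) (here, one SimpleChannelPool per downstream InetSocketAddress) and caches it so later lookups for the same address reuse the same pool. A rough plain-JDK analog of that per-key lazy caching (hypothetical LazyPoolMap name, no Netty dependency) could be sketched as:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical analog of AbstractChannelPoolMap: build each pool object at
// most once per key and cache it, so repeated get(key) calls share one pool.
class LazyPoolMap<K, P> {
    private final Map<K, P> pools = new ConcurrentHashMap<>();
    private final Function<K, P> newPool; // plays the role of newPool(key)

    LazyPoolMap(Function<K, P> newPool) {
        this.newPool = newPool;
    }

    public P get(K key) {
        // computeIfAbsent invokes the factory at most once per key, even
        // when multiple threads race on the first lookup for that key.
        return pools.computeIfAbsent(key, newPool);
    }
}
```

With this shape, two acquisitions against the same downstream address share one pool object, while distinct addresses each get their own, which is exactly why getPoolMap only needs to build the map itself once.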