
Example 46 with Future

Use of io.netty.util.concurrent.Future in project netty by netty.

The class SocksServerConnectHandler, method channelRead0.

@Override
public void channelRead0(final ChannelHandlerContext ctx, final SocksMessage message) throws Exception {
    if (message instanceof Socks4CommandRequest) {
        final Socks4CommandRequest request = (Socks4CommandRequest) message;
        Promise<Channel> promise = ctx.executor().newPromise();
        promise.addListener(new FutureListener<Channel>() {

            @Override
            public void operationComplete(final Future<Channel> future) throws Exception {
                final Channel outboundChannel = future.getNow();
                if (future.isSuccess()) {
                    ChannelFuture responseFuture = ctx.channel().writeAndFlush(new DefaultSocks4CommandResponse(Socks4CommandStatus.SUCCESS));
                    responseFuture.addListener(new ChannelFutureListener() {

                        @Override
                        public void operationComplete(ChannelFuture channelFuture) {
                            ctx.pipeline().remove(SocksServerConnectHandler.this);
                            outboundChannel.pipeline().addLast(new RelayHandler(ctx.channel()));
                            ctx.pipeline().addLast(new RelayHandler(outboundChannel));
                        }
                    });
                } else {
                    ctx.channel().writeAndFlush(new DefaultSocks4CommandResponse(Socks4CommandStatus.REJECTED_OR_FAILED));
                    SocksServerUtils.closeOnFlush(ctx.channel());
                }
            }
        });
        final Channel inboundChannel = ctx.channel();
        b.group(inboundChannel.eventLoop())
                .channel(NioSocketChannel.class)
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10000)
                .option(ChannelOption.SO_KEEPALIVE, true)
                .handler(new DirectClientHandler(promise));
        b.connect(request.dstAddr(), request.dstPort()).addListener(new ChannelFutureListener() {

            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (future.isSuccess()) {
                    // Connection established; use the handler-provided results.
                } else {
                    // Close the connection if the connection attempt has failed.
                    ctx.channel().writeAndFlush(new DefaultSocks4CommandResponse(Socks4CommandStatus.REJECTED_OR_FAILED));
                    SocksServerUtils.closeOnFlush(ctx.channel());
                }
            }
        });
    } else if (message instanceof Socks5CommandRequest) {
        final Socks5CommandRequest request = (Socks5CommandRequest) message;
        Promise<Channel> promise = ctx.executor().newPromise();
        promise.addListener(new FutureListener<Channel>() {

            @Override
            public void operationComplete(final Future<Channel> future) throws Exception {
                final Channel outboundChannel = future.getNow();
                if (future.isSuccess()) {
                    ChannelFuture responseFuture = ctx.channel().writeAndFlush(new DefaultSocks5CommandResponse(Socks5CommandStatus.SUCCESS, request.dstAddrType(), request.dstAddr(), request.dstPort()));
                    responseFuture.addListener(new ChannelFutureListener() {

                        @Override
                        public void operationComplete(ChannelFuture channelFuture) {
                            ctx.pipeline().remove(SocksServerConnectHandler.this);
                            outboundChannel.pipeline().addLast(new RelayHandler(ctx.channel()));
                            ctx.pipeline().addLast(new RelayHandler(outboundChannel));
                        }
                    });
                } else {
                    ctx.channel().writeAndFlush(new DefaultSocks5CommandResponse(Socks5CommandStatus.FAILURE, request.dstAddrType()));
                    SocksServerUtils.closeOnFlush(ctx.channel());
                }
            }
        });
        final Channel inboundChannel = ctx.channel();
        b.group(inboundChannel.eventLoop())
                .channel(NioSocketChannel.class)
                .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 10000)
                .option(ChannelOption.SO_KEEPALIVE, true)
                .handler(new DirectClientHandler(promise));
        b.connect(request.dstAddr(), request.dstPort()).addListener(new ChannelFutureListener() {

            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (future.isSuccess()) {
                    // Connection established; use the handler-provided results.
                } else {
                    // Close the connection if the connection attempt has failed.
                    ctx.channel().writeAndFlush(new DefaultSocks5CommandResponse(Socks5CommandStatus.FAILURE, request.dstAddrType()));
                    SocksServerUtils.closeOnFlush(ctx.channel());
                }
            }
        });
    } else {
        ctx.close();
    }
}
Also used : ChannelFuture(io.netty.channel.ChannelFuture) FutureListener(io.netty.util.concurrent.FutureListener) ChannelFutureListener(io.netty.channel.ChannelFutureListener) Socks5CommandRequest(io.netty.handler.codec.socksx.v5.Socks5CommandRequest) NioSocketChannel(io.netty.channel.socket.nio.NioSocketChannel) Channel(io.netty.channel.Channel) DefaultSocks5CommandResponse(io.netty.handler.codec.socksx.v5.DefaultSocks5CommandResponse) Promise(io.netty.util.concurrent.Promise) Future(io.netty.util.concurrent.Future) Socks4CommandRequest(io.netty.handler.codec.socksx.v4.Socks4CommandRequest) DefaultSocks4CommandResponse(io.netty.handler.codec.socksx.v4.DefaultSocks4CommandResponse)
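
The Bootstrap referenced as b in channelRead0 is a field of the handler that the excerpt does not show, and DirectClientHandler (also not shown) is what fulfils the Promise<Channel> once the outbound connection is up. A minimal sketch of that promise-fulfilling side, with an illustrative class name rather than the real DirectClientHandler source:

import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.concurrent.Promise;

// Minimal sketch, not the actual netty example sources. The Bootstrap "b"
// used in channelRead0 is assumed to be a per-handler field, e.g.
//     private final Bootstrap b = new Bootstrap();
// The handler below plays the role of DirectClientHandler: it completes the
// Promise<Channel> when the outbound channel becomes active, or fails it.
public final class PromiseFulfillingHandler extends ChannelInboundHandlerAdapter {

    private final Promise<Channel> promise;

    public PromiseFulfillingHandler(Promise<Channel> promise) {
        this.promise = promise;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // Hand the connected outbound channel back to the listener that was
        // registered on the promise in channelRead0 above.
        ctx.pipeline().remove(this);
        promise.setSuccess(ctx.channel());
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Failing the promise makes that listener send REJECTED_OR_FAILED / FAILURE.
        promise.tryFailure(cause);
        ctx.close();
    }
}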

Example 47 with Future

Use of io.netty.util.concurrent.Future in project pulsar by yahoo.

The class ConsumerImpl, method sendAcknowledge.

private CompletableFuture<Void> sendAcknowledge(MessageId messageId, AckType ackType) {
    MessageIdImpl msgId = (MessageIdImpl) messageId;
    final ByteBuf cmd = Commands.newAck(consumerId, msgId.getLedgerId(), msgId.getEntryId(), ackType, null);
    // There's no actual response from ack messages
    final CompletableFuture<Void> ackFuture = new CompletableFuture<Void>();
    if (isConnected()) {
        cnx().ctx().writeAndFlush(cmd).addListener(new GenericFutureListener<Future<Void>>() {

            @Override
            public void operationComplete(Future<Void> future) throws Exception {
                if (future.isSuccess()) {
                    if (ackType == AckType.Individual) {
                        unAckedMessageTracker.remove(msgId);
                        // increment counter by 1 for non-batch msg
                        if (!(messageId instanceof BatchMessageIdImpl)) {
                            stats.incrementNumAcksSent(1);
                        }
                    } else if (ackType == AckType.Cumulative) {
                        stats.incrementNumAcksSent(unAckedMessageTracker.removeMessagesTill(msgId));
                    }
                    if (log.isDebugEnabled()) {
                        log.debug("[{}] [{}] [{}] Successfully acknowledged message - {}, acktype {}", subscription, topic, consumerName, messageId, ackType);
                    }
                    ackFuture.complete(null);
                } else {
                    stats.incrementNumAcksFailed();
                    ackFuture.completeExceptionally(new PulsarClientException(future.cause()));
                }
            }
        });
    } else {
        stats.incrementNumAcksFailed();
        ackFuture.completeExceptionally(new PulsarClientException("Not connected to broker. State: " + getState()));
    }
    return ackFuture;
}
Also used : CompletableFuture(java.util.concurrent.CompletableFuture) PulsarClientException(com.yahoo.pulsar.client.api.PulsarClientException) Future(io.netty.util.concurrent.Future) ByteBuf(io.netty.buffer.ByteBuf) IOException(java.io.IOException)
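
The body of the listener above is the standard bridge from a Netty write Future to a java.util CompletableFuture<Void>. A minimal, self-contained sketch of that bridge using a lambda listener; NettyFutureBridge, writeAndTrack, and the payload argument are illustrative names, not Pulsar API:

import io.netty.channel.Channel;
import java.util.concurrent.CompletableFuture;

// Minimal sketch (illustrative, not Pulsar code): adapt a Netty write/flush
// Future into a CompletableFuture<Void>, the same bridge sendAcknowledge
// builds by hand with an anonymous GenericFutureListener.
final class NettyFutureBridge {

    static CompletableFuture<Void> writeAndTrack(Channel channel, Object payload) {
        CompletableFuture<Void> result = new CompletableFuture<>();
        channel.writeAndFlush(payload).addListener(future -> {
            if (future.isSuccess()) {
                result.complete(null);                          // write left the socket buffer
            } else {
                result.completeExceptionally(future.cause());   // surface the I/O failure
            }
        });
        return result;
    }

    private NettyFutureBridge() {
    }
}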

Example 48 with Future

Use of io.netty.util.concurrent.Future in project riposte by Nike-Inc.

The class StreamingAsyncHttpClient, method getPoolMap.

protected ChannelPoolMap<InetSocketAddress, SimpleChannelPool> getPoolMap() {
    ChannelPoolMap<InetSocketAddress, SimpleChannelPool> result = poolMap;
    if (poolMap == null) {
        /*
                This method gets called for every downstream call, so we don't want to synchronize the whole method. But
                it's easy for multiple threads to get here at the same time when the server starts up, so we need *some*
                kind of protection around the creation of poolMap, hence the elaborate (but correct) double-checked
                locking. Since poolMap is volatile this works, and the local variable "result" helps with speed during
                the normal case where poolMap has already been initialized.
                See https://en.wikipedia.org/wiki/Double-checked_locking
             */
        synchronized (this) {
            result = poolMap;
            if (result == null) {
                EventLoopGroup eventLoopGroup;
                Class<? extends SocketChannel> channelClass;
                if (Epoll.isAvailable()) {
                    logger.info("Creating channel pool. The epoll native transport is available. Using epoll instead of " + "NIO. proxy_router_using_native_epoll_transport=true");
                    eventLoopGroup = new EpollEventLoopGroup(0, createProxyRouterThreadFactory());
                    channelClass = EpollSocketChannel.class;
                } else {
                    logger.info("Creating channel pool. The epoll native transport is NOT available or you are not running " + "on a compatible OS/architecture. Using NIO. " + "proxy_router_using_native_epoll_transport=false");
                    eventLoopGroup = new NioEventLoopGroup(0, createProxyRouterThreadFactory());
                    channelClass = NioSocketChannel.class;
                }
                result = new AbstractChannelPoolMap<InetSocketAddress, SimpleChannelPool>() {

                    @Override
                    protected SimpleChannelPool newPool(InetSocketAddress key) {
                        return new SimpleChannelPool(generateClientBootstrap(eventLoopGroup, channelClass).remoteAddress(key), new ChannelPoolHandlerImpl(), CHANNEL_HEALTH_CHECK_INSTANCE) {

                            @Override
                            public Future<Void> release(Channel channel, Promise<Void> promise) {
                                markChannelBrokenAndLogInfoIfHttpClientCodecStateIsNotZero(channel, "Releasing channel back to pool");
                                return super.release(channel, promise);
                            }

                            @Override
                            protected Channel pollChannel() {
                                Channel channel = super.pollChannel();
                                if (channel != null) {
                                    markChannelBrokenAndLogInfoIfHttpClientCodecStateIsNotZero(channel, "Polling channel to be reused before healthcheck");
                                    if (idleChannelTimeoutMillis > 0) {
                                        /*
                                             We have a channel that is about to be re-used, so disable the idle channel
                                             timeout detector if it exists. By disabling it here we make sure that it is
                                             effectively "gone" before the healthcheck happens, preventing race
                                             conditions. Note that we can't call pipeline.remove() here because we may
                                             not be in the pipeline's event loop, so calling pipeline.remove() could
                                             lead to thread deadlock, but we can't call channel.eventLoop().execute()
                                             because we need it disabled *now* before the healthcheck happens. The
                                             pipeline preparation phase will remove it safely soon, and in the meantime
                                             it will be disabled.
                                             */
                                        ChannelPipeline pipeline = channel.pipeline();
                                        ChannelHandler idleHandler = pipeline.get(DOWNSTREAM_IDLE_CHANNEL_TIMEOUT_HANDLER_NAME);
                                        if (idleHandler != null) {
                                            ((DownstreamIdleChannelTimeoutHandler) idleHandler).disableTimeoutHandling();
                                        }
                                    }
                                }
                                return channel;
                            }

                            @Override
                            protected boolean offerChannel(Channel channel) {
                                if (idleChannelTimeoutMillis > 0) {
                                    // Add an idle channel timeout detector. This will be removed before the
                                    //      channel's reacquisition healthcheck runs (in pollChannel()), so we won't
                                    //      have a race condition where this channel is handed over for use but gets
                                    //      squashed right before it's about to be used.
                                    // NOTE: Due to the semantics of pool.release() we're guaranteed to be in the
                                    //      channel's event loop, so there's no chance of a thread deadlock when
                                    //      messing with the pipeline.
                                    channel.pipeline().addFirst(DOWNSTREAM_IDLE_CHANNEL_TIMEOUT_HANDLER_NAME, new DownstreamIdleChannelTimeoutHandler(idleChannelTimeoutMillis, () -> true, false, "StreamingAsyncHttpClientChannel-idle", null, null));
                                }
                                return super.offerChannel(channel);
                            }
                        };
                    }
                };
                poolMap = result;
            }
        }
    }
    return result;
}
Also used : DownstreamIdleChannelTimeoutHandler(com.nike.riposte.client.asynchttp.netty.downstreampipeline.DownstreamIdleChannelTimeoutHandler) InetSocketAddress(java.net.InetSocketAddress) SocketChannel(io.netty.channel.socket.SocketChannel) NioSocketChannel(io.netty.channel.socket.nio.NioSocketChannel) EpollSocketChannel(io.netty.channel.epoll.EpollSocketChannel) Channel(io.netty.channel.Channel) ChannelHandler(io.netty.channel.ChannelHandler) ChannelPipeline(io.netty.channel.ChannelPipeline) NioEventLoopGroup(io.netty.channel.nio.NioEventLoopGroup) EpollEventLoopGroup(io.netty.channel.epoll.EpollEventLoopGroup) EventLoopGroup(io.netty.channel.EventLoopGroup) CompletableFuture(java.util.concurrent.CompletableFuture) ChannelFuture(io.netty.channel.ChannelFuture) Future(io.netty.util.concurrent.Future) SimpleChannelPool(io.netty.channel.pool.SimpleChannelPool)
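
The double-checked locking spelled out in the comment inside getPoolMap can be captured in a small reusable holder. A minimal sketch (LazyPoolHolder is an illustrative name, not riposte code) showing why the field must be volatile and why the local result variable matters:

import java.util.function.Supplier;

// Minimal double-checked-locking sketch (illustrative, not riposte code).
// The field must be volatile so that a fully constructed value is safely
// published to other threads; the local variable keeps the fast path to a
// single volatile read once the value exists.
final class LazyPoolHolder<T> {

    private volatile T value;
    private final Supplier<T> factory;

    LazyPoolHolder(Supplier<T> factory) {
        this.factory = factory;
    }

    T get() {
        T result = value;                 // single volatile read on the fast path
        if (result == null) {
            synchronized (this) {
                result = value;           // re-check under the lock
                if (result == null) {
                    result = factory.get();
                    value = result;       // publish the initialized instance
                }
            }
        }
        return result;
    }
}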

Example 49 with Future

Use of io.netty.util.concurrent.Future in project tesla by linking12.

The class ConnectionFlow, method fail.

@SuppressWarnings({ "unchecked", "rawtypes" })
public void fail(final Throwable cause) {
    final ConnectionState lastStateBeforeFailure = serverConnection.getCurrentState();
    serverConnection.disconnect().addListener(new GenericFutureListener() {

        @Override
        public void operationComplete(Future future) throws Exception {
            synchronized (connectLock) {
                if (!clientConnection.serverConnectionFailed(serverConnection, lastStateBeforeFailure, cause)) {
                    serverConnection.become(ConnectionState.DISCONNECTED);
                    notifyThreadsWaitingForConnection();
                }
            }
        }
    });
}
Also used : Future(io.netty.util.concurrent.Future) ConnectionState(io.github.tesla.gateway.netty.transmit.ConnectionState) GenericFutureListener(io.netty.util.concurrent.GenericFutureListener)
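
GenericFutureListener is a functional interface, so the raw-typed anonymous class above (and the @SuppressWarnings it forces) can usually be replaced with a lambda. A minimal, self-contained sketch of the same listener pattern on a Netty Promise; the demo class and the DefaultEventExecutor setup are illustrative, not tesla code:

import io.netty.util.concurrent.DefaultEventExecutor;
import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.Promise;

// Minimal sketch: register a lambda instead of a raw GenericFutureListener.
public final class LambdaListenerDemo {

    public static void main(String[] args) throws Exception {
        EventExecutor executor = new DefaultEventExecutor();
        Promise<String> promise = executor.newPromise();

        // The lambda parameter is inferred as the listener's Future type, so no
        // raw types and no @SuppressWarnings are needed.
        promise.addListener(future -> {
            if (future.isSuccess()) {
                System.out.println("completed with: " + future.getNow());
            } else {
                System.out.println("failed: " + future.cause());
            }
        });

        promise.setSuccess("disconnected");   // triggers the listener
        executor.shutdownGracefully().sync();
    }
}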

Aggregations

Future (io.netty.util.concurrent.Future): 49
FutureListener (io.netty.util.concurrent.FutureListener): 29
RFuture (org.redisson.api.RFuture): 22
ChannelFuture (io.netty.channel.ChannelFuture): 15
ChannelFutureListener (io.netty.channel.ChannelFutureListener): 11
Channel (io.netty.channel.Channel): 10
ArrayList (java.util.ArrayList): 10
IOException (java.io.IOException): 9
Timeout (io.netty.util.Timeout): 8
TimerTask (io.netty.util.TimerTask): 8
AtomicReference (java.util.concurrent.atomic.AtomicReference): 8
ScheduledFuture (io.netty.util.concurrent.ScheduledFuture): 7
AtomicBoolean (java.util.concurrent.atomic.AtomicBoolean): 7
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 7
RedisException (org.redisson.client.RedisException): 7
NioSocketChannel (io.netty.channel.socket.nio.NioSocketChannel): 6
Collection (java.util.Collection): 6
Test (org.junit.Test): 6
RedisConnection (org.redisson.client.RedisConnection): 6
RedisConnectionException (org.redisson.client.RedisConnectionException): 6