Example 6 with ServerChannel

Use of io.netty.channel.ServerChannel in project netty by netty.

Class DetectPeerCloseWithoutReadTest, method clientCloseWithoutServerReadIsDetected0.

private void clientCloseWithoutServerReadIsDetected0(final boolean extraReadRequested) throws InterruptedException {
    EventLoopGroup serverGroup = null;
    EventLoopGroup clientGroup = null;
    Channel serverChannel = null;
    try {
        final CountDownLatch latch = new CountDownLatch(1);
        final AtomicInteger bytesRead = new AtomicInteger();
        final int expectedBytes = 100;
        serverGroup = newGroup();
        clientGroup = newGroup();
        ServerBootstrap sb = new ServerBootstrap();
        sb.group(serverGroup);
        sb.channel(serverChannel());
        // Ensure we read only one message per read() call and that we need multiple read()
        // calls to consume everything.
        sb.childOption(ChannelOption.AUTO_READ, false);
        sb.childOption(ChannelOption.MAX_MESSAGES_PER_READ, 1);
        sb.childOption(ChannelOption.RCVBUF_ALLOCATOR, new FixedRecvByteBufAllocator(expectedBytes / 10));
        sb.childHandler(new ChannelInitializer<Channel>() {

            @Override
            protected void initChannel(Channel ch) {
                ch.pipeline().addLast(new TestHandler(bytesRead, extraReadRequested, latch));
            }
        });
        serverChannel = sb.bind(new InetSocketAddress(0)).syncUninterruptibly().channel();
        Bootstrap cb = new Bootstrap();
        cb.group(clientGroup);
        cb.channel(clientChannel());
        cb.handler(new ChannelInboundHandlerAdapter());
        Channel clientChannel = cb.connect(serverChannel.localAddress()).syncUninterruptibly().channel();
        ByteBuf buf = clientChannel.alloc().buffer(expectedBytes);
        buf.writerIndex(buf.writerIndex() + expectedBytes);
        clientChannel.writeAndFlush(buf).addListener(ChannelFutureListener.CLOSE);
        latch.await();
        assertEquals(expectedBytes, bytesRead.get());
    } finally {
        if (serverChannel != null) {
            serverChannel.close().syncUninterruptibly();
        }
        if (serverGroup != null) {
            serverGroup.shutdownGracefully();
        }
        if (clientGroup != null) {
            clientGroup.shutdownGracefully();
        }
    }
}
Also used : InetSocketAddress(java.net.InetSocketAddress) ServerChannel(io.netty.channel.ServerChannel) Channel(io.netty.channel.Channel) CountDownLatch(java.util.concurrent.CountDownLatch) ByteBuf(io.netty.buffer.ByteBuf) ServerBootstrap(io.netty.bootstrap.ServerBootstrap) EventLoopGroup(io.netty.channel.EventLoopGroup) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) FixedRecvByteBufAllocator(io.netty.channel.FixedRecvByteBufAllocator) Bootstrap(io.netty.bootstrap.Bootstrap) ChannelInboundHandlerAdapter(io.netty.channel.ChannelInboundHandlerAdapter)
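The snippet relies on helpers the aggregator elides (newGroup(), serverChannel(), clientChannel(), TestHandler). As a rough, hypothetical sketch only (the name below is illustrative, not netty's actual test class): a handler in the spirit of TestHandler counts the bytes it receives, keeps requesting reads because AUTO_READ is disabled, and releases the latch once the peer's close surfaces as channelInactive.

// Hypothetical stand-in for TestHandler. Uses io.netty.channel.SimpleChannelInboundHandler,
// which releases the inbound ByteBuf automatically.
static final class CountingCloseHandler extends SimpleChannelInboundHandler<ByteBuf> {
    private final AtomicInteger bytesRead;
    private final boolean extraReadRequested;
    private final CountDownLatch latch;

    CountingCloseHandler(AtomicInteger bytesRead, boolean extraReadRequested, CountDownLatch latch) {
        this.bytesRead = bytesRead;
        this.extraReadRequested = extraReadRequested;
        this.latch = latch;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // AUTO_READ is off, so the first read must be requested explicitly.
        ctx.read();
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
        bytesRead.addAndGet(msg.readableBytes());
        if (extraReadRequested) {
            // Optionally request an extra read before channelReadComplete fires.
            ctx.read();
        }
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        // Keep pulling one message at a time until the peer's close is seen.
        ctx.read();
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        // channelInactive fires only after pending input has been drained,
        // so by now bytesRead should equal expectedBytes.
        latch.countDown();
    }
}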

Example 7 with ServerChannel

Use of io.netty.channel.ServerChannel in project netty by netty.

Class BootstrapTest, method testLateRegisterSuccessBindFailed.

@Test
public void testLateRegisterSuccessBindFailed() throws Exception {
    TestEventLoopGroup group = new TestEventLoopGroup();
    try {
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap.group(group);
        bootstrap.channelFactory(new ChannelFactory<ServerChannel>() {

            @Override
            public ServerChannel newChannel() {
                return new LocalServerChannel() {

                    @Override
                    public ChannelFuture bind(SocketAddress localAddress) {
                        // Close the Channel to emulate what NIO and others impl do on bind failure
                        // See https://github.com/netty/netty/issues/2586
                        close();
                        return newFailedFuture(new SocketException());
                    }

                    @Override
                    public ChannelFuture bind(SocketAddress localAddress, ChannelPromise promise) {
                        // Close the Channel to emulate what NIO and others impl do on bind failure
                        // See https://github.com/netty/netty/issues/2586
                        close();
                        return promise.setFailure(new SocketException());
                    }
                };
            }
        });
        bootstrap.childHandler(new DummyHandler());
        bootstrap.localAddress(new LocalAddress("1"));
        ChannelFuture future = bootstrap.bind();
        assertFalse(future.isDone());
        group.promise.setSuccess();
        final BlockingQueue<Boolean> queue = new LinkedBlockingQueue<Boolean>();
        future.addListener(new ChannelFutureListener() {

            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                queue.add(future.channel().eventLoop().inEventLoop(Thread.currentThread()));
                queue.add(future.isSuccess());
            }
        });
        assertTrue(queue.take());
        assertFalse(queue.take());
    } finally {
        group.shutdownGracefully();
        group.terminationFuture().sync();
    }
}
Also used : ChannelFuture(io.netty.channel.ChannelFuture) SocketException(java.net.SocketException) LocalAddress(io.netty.channel.local.LocalAddress) ChannelPromise(io.netty.channel.ChannelPromise) LocalServerChannel(io.netty.channel.local.LocalServerChannel) ServerChannel(io.netty.channel.ServerChannel) LinkedBlockingQueue(java.util.concurrent.LinkedBlockingQueue) ChannelFutureListener(io.netty.channel.ChannelFutureListener) ConnectException(java.net.ConnectException) UnknownHostException(java.net.UnknownHostException) SocketAddress(java.net.SocketAddress) Test(org.junit.jupiter.api.Test)
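TestEventLoopGroup and DummyHandler are also elided here. Judging from group.promise.setSuccess() above, the "late register" trick hinges on the group handing the bootstrap an uncompleted promise from register(), which the test fulfils only after bind() has been called. A simplified, hypothetical version of that idea (not the actual test class):

// Hypothetical sketch: registration happens eagerly, but the bootstrap sees an
// uncompleted promise, so the bind can only proceed once the test calls
// promise.setSuccess().
static final class TestEventLoopGroup extends DefaultEventLoopGroup {
    ChannelPromise promise;

    TestEventLoopGroup() {
        super(1); // a single event loop suffices for the test
    }

    @Override
    public ChannelFuture register(Channel channel) {
        super.register(channel).syncUninterruptibly(); // register for real...
        promise = channel.newPromise();                // ...but report "pending"
        return promise;
    }
}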

Example 8 with ServerChannel

Use of io.netty.channel.ServerChannel in project grpc-java by grpc.

Class AsyncServer, method newServer.

static Server newServer(ServerConfiguration config) throws IOException {
    final EventLoopGroup boss;
    final EventLoopGroup worker;
    final Class<? extends ServerChannel> channelType;
    ThreadFactory tf = new DefaultThreadFactory("server-elg-", true);
    switch(config.transport) {
        case NETTY_NIO:
            {
                boss = new NioEventLoopGroup(1, tf);
                worker = new NioEventLoopGroup(0, tf);
                channelType = NioServerSocketChannel.class;
                break;
            }
        case NETTY_EPOLL:
            {
                try {
                    // These classes are only available on linux.
                    Class<?> groupClass = Class.forName("io.netty.channel.epoll.EpollEventLoopGroup");
                    @SuppressWarnings("unchecked") Class<? extends ServerChannel> channelClass = (Class<? extends ServerChannel>) Class.forName("io.netty.channel.epoll.EpollServerSocketChannel");
                    boss = (EventLoopGroup) groupClass.getConstructor(int.class, ThreadFactory.class).newInstance(1, tf);
                    worker = (EventLoopGroup) groupClass.getConstructor(int.class, ThreadFactory.class).newInstance(0, tf);
                    channelType = channelClass;
                    break;
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        case NETTY_UNIX_DOMAIN_SOCKET:
            {
                try {
                    // These classes are only available on linux.
                    Class<?> groupClass = Class.forName("io.netty.channel.epoll.EpollEventLoopGroup");
                    @SuppressWarnings("unchecked") Class<? extends ServerChannel> channelClass = (Class<? extends ServerChannel>) Class.forName("io.netty.channel.epoll.EpollServerDomainSocketChannel");
                    boss = (EventLoopGroup) groupClass.getConstructor(int.class, ThreadFactory.class).newInstance(1, tf);
                    worker = (EventLoopGroup) groupClass.getConstructor(int.class, ThreadFactory.class).newInstance(0, tf);
                    channelType = channelClass;
                    break;
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        default:
            {
                // Should never get here.
                throw new IllegalArgumentException("Unsupported transport: " + config.transport);
            }
    }
    NettyServerBuilder builder = NettyServerBuilder.forAddress(config.address)
            .bossEventLoopGroup(boss)
            .workerEventLoopGroup(worker)
            .channelType(channelType)
            .addService(new BenchmarkServiceImpl())
            .flowControlWindow(config.flowControlWindow);
    if (config.tls) {
        System.out.println("Using fake CA for TLS certificate.\n" + "Run the Java client with --tls --testca");
        File cert = TestUtils.loadCert("server1.pem");
        File key = TestUtils.loadCert("server1.key");
        builder.useTransportSecurity(cert, key);
    }
    if (config.directExecutor) {
        builder.directExecutor();
    } else {
        // TODO(carl-mastrangelo): This should not be necessary.  I don't know where this should be
        // put.  Move it somewhere else, or remove it if no longer necessary.
        // See: https://github.com/grpc/grpc-java/issues/2119
        builder.executor(new ForkJoinPool(Runtime.getRuntime().availableProcessors(), new ForkJoinWorkerThreadFactory() {

            final AtomicInteger num = new AtomicInteger();

            @Override
            public ForkJoinWorkerThread newThread(ForkJoinPool pool) {
                ForkJoinWorkerThread thread = ForkJoinPool.defaultForkJoinWorkerThreadFactory.newThread(pool);
                thread.setDaemon(true);
                thread.setName("grpc-server-app-" + num.getAndIncrement());
                return thread;
            }
        }, UncaughtExceptionHandlers.systemExit(), true));
    }
    return builder.build();
}
Also used : DefaultThreadFactory(io.netty.util.concurrent.DefaultThreadFactory) ThreadFactory(java.util.concurrent.ThreadFactory) ForkJoinWorkerThreadFactory(java.util.concurrent.ForkJoinPool.ForkJoinWorkerThreadFactory) NioServerSocketChannel(io.netty.channel.socket.nio.NioServerSocketChannel) NettyServerBuilder(io.grpc.netty.NettyServerBuilder) ServerChannel(io.netty.channel.ServerChannel) IOException(java.io.IOException) EventLoopGroup(io.netty.channel.EventLoopGroup) NioEventLoopGroup(io.netty.channel.nio.NioEventLoopGroup) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) ForkJoinWorkerThread(java.util.concurrent.ForkJoinWorkerThread) File(java.io.File) ForkJoinPool(java.util.concurrent.ForkJoinPool)
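The reflection in the epoll branches exists only so the class loads on platforms without the native transport. If an application can depend on netty-transport-native-epoll directly, the NETTY_EPOLL branch collapses to plain constructor calls; a sketch under that assumption:

// Direct equivalent of the NETTY_EPOLL branch when compiling against
// io.netty:netty-transport-native-epoll (the transport itself runs on Linux only).
// Requires io.netty.channel.epoll.EpollEventLoopGroup and
// io.netty.channel.epoll.EpollServerSocketChannel on the classpath.
boss = new EpollEventLoopGroup(1, tf);
worker = new EpollEventLoopGroup(0, tf);
channelType = EpollServerSocketChannel.class;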

Example 9 with ServerChannel

Use of io.netty.channel.ServerChannel in project grpc-java by grpc.

Class TransportBenchmark, method setUp.

@Setup
public void setUp() throws Exception {
    ServerCredentials serverCreds = InsecureServerCredentials.create();
    ServerBuilder<?> serverBuilder;
    ManagedChannelBuilder<?> channelBuilder;
    switch(transport) {
        case INPROCESS:
            {
                String name = "bench" + Math.random();
                serverBuilder = InProcessServerBuilder.forName(name);
                channelBuilder = InProcessChannelBuilder.forName(name);
                break;
            }
        case NETTY:
            {
                InetSocketAddress address = new InetSocketAddress("localhost", pickUnusedPort());
                serverBuilder = NettyServerBuilder.forAddress(address, serverCreds);
                channelBuilder = NettyChannelBuilder.forAddress(address).negotiationType(NegotiationType.PLAINTEXT);
                break;
            }
        case NETTY_LOCAL:
            {
                String name = "bench" + Math.random();
                LocalAddress address = new LocalAddress(name);
                EventLoopGroup group = new DefaultEventLoopGroup();
                serverBuilder = NettyServerBuilder.forAddress(address, serverCreds)
                        .bossEventLoopGroup(group)
                        .workerEventLoopGroup(group)
                        .channelType(LocalServerChannel.class);
                channelBuilder = NettyChannelBuilder.forAddress(address)
                        .eventLoopGroup(group)
                        .channelType(LocalChannel.class)
                        .negotiationType(NegotiationType.PLAINTEXT);
                groupToShutdown = group;
                break;
            }
        case NETTY_EPOLL:
            {
                InetSocketAddress address = new InetSocketAddress("localhost", pickUnusedPort());
                // Reflection used since they are only available on linux.
                Class<?> groupClass = Class.forName("io.netty.channel.epoll.EpollEventLoopGroup");
                EventLoopGroup group = (EventLoopGroup) groupClass.getConstructor().newInstance();
                Class<? extends ServerChannel> serverChannelClass =
                        Class.forName("io.netty.channel.epoll.EpollServerSocketChannel")
                                .asSubclass(ServerChannel.class);
                serverBuilder = NettyServerBuilder.forAddress(address, serverCreds)
                        .bossEventLoopGroup(group)
                        .workerEventLoopGroup(group)
                        .channelType(serverChannelClass);
                Class<? extends Channel> channelClass =
                        Class.forName("io.netty.channel.epoll.EpollSocketChannel")
                                .asSubclass(Channel.class);
                channelBuilder = NettyChannelBuilder.forAddress(address)
                        .eventLoopGroup(group)
                        .channelType(channelClass)
                        .negotiationType(NegotiationType.PLAINTEXT);
                groupToShutdown = group;
                break;
            }
        case OKHTTP:
            {
                int port = pickUnusedPort();
                InetSocketAddress address = new InetSocketAddress("localhost", port);
                serverBuilder = NettyServerBuilder.forAddress(address, serverCreds);
                channelBuilder = OkHttpChannelBuilder.forAddress("localhost", port, InsecureChannelCredentials.create());
                break;
            }
        default:
            throw new Exception("Unknown transport: " + transport);
    }
    if (direct) {
        serverBuilder.directExecutor();
        // Because blocking stubs avoid the executor, this doesn't do much.
        channelBuilder.directExecutor();
    }
    server = serverBuilder.addService(new AsyncServer.BenchmarkServiceImpl()).build();
    server.start();
    channel = channelBuilder.build();
    stub = BenchmarkServiceGrpc.newBlockingStub(channel);
    asyncStub = BenchmarkServiceGrpc.newStub(channel);
    // Wait for channel to start
    stub.unaryCall(SimpleRequest.getDefaultInstance());
}
Also used : LocalAddress(io.netty.channel.local.LocalAddress) ServerCredentials(io.grpc.ServerCredentials) InsecureServerCredentials(io.grpc.InsecureServerCredentials) InetSocketAddress(java.net.InetSocketAddress) LocalChannel(io.netty.channel.local.LocalChannel) ManagedChannel(io.grpc.ManagedChannel) LocalServerChannel(io.netty.channel.local.LocalServerChannel) ServerChannel(io.netty.channel.ServerChannel) Channel(io.netty.channel.Channel) AsyncServer(io.grpc.benchmarks.qps.AsyncServer) ByteString(com.google.protobuf.ByteString) DefaultEventLoopGroup(io.netty.channel.DefaultEventLoopGroup) StatusRuntimeException(io.grpc.StatusRuntimeException) EventLoopGroup(io.netty.channel.EventLoopGroup) Setup(org.openjdk.jmh.annotations.Setup)
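A benchmark that allocates event loop groups in @Setup needs a matching @TearDown. The benchmark's actual teardown is not shown here; a minimal sketch, assuming only the fields created above, might look like:

// Sketch of a matching JMH tear-down (the real TransportBenchmark teardown may differ).
@TearDown
public void tearDown() throws Exception {
    channel.shutdownNow();
    server.shutdownNow();
    if (groupToShutdown != null) {
        // Graceful shutdown quiets the event loops before terminating them.
        groupToShutdown.shutdownGracefully().sync();
    }
}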

Example 10 with ServerChannel

Use of io.netty.channel.ServerChannel in project riposte by Nike-Inc.

Class Server, method startup.

public void startup() throws CertificateException, IOException, InterruptedException {
    if (startedUp) {
        throw new IllegalArgumentException("This Server instance has already started. " + "You can only call startup() once");
    }
    // Figure out what port to bind to.
    int port = Integer.parseInt(System.getProperty(
            "endpointsPort",
            serverConfig.isEndpointsUseSsl()
                    ? String.valueOf(serverConfig.endpointsSslPort())
                    : String.valueOf(serverConfig.endpointsPort())));
    // Configure SSL if desired.
    final SslContext sslCtx;
    if (serverConfig.isEndpointsUseSsl()) {
        sslCtx = serverConfig.createSslContext();
    } else {
        sslCtx = null;
    }
    // Configure the server
    EventLoopGroup bossGroup;
    EventLoopGroup workerGroup;
    Class<? extends ServerChannel> channelClass;
    // Use the native epoll transport when it's available; otherwise fall back to NIO.
    if (Epoll.isAvailable()) {
        logger.info("The epoll native transport is available. Using epoll instead of NIO. " + "riposte_server_using_native_epoll_transport=true");
        bossGroup = (serverConfig.bossThreadFactory() == null)
                ? new EpollEventLoopGroup(serverConfig.numBossThreads())
                : new EpollEventLoopGroup(serverConfig.numBossThreads(), serverConfig.bossThreadFactory());
        workerGroup = (serverConfig.workerThreadFactory() == null)
                ? new EpollEventLoopGroup(serverConfig.numWorkerThreads())
                : new EpollEventLoopGroup(serverConfig.numWorkerThreads(), serverConfig.workerThreadFactory());
        channelClass = EpollServerSocketChannel.class;
    } else {
        logger.info("The epoll native transport is NOT available or you are not running on a compatible " + "OS/architecture. Using NIO. riposte_server_using_native_epoll_transport=false");
        bossGroup = (serverConfig.bossThreadFactory() == null)
                ? new NioEventLoopGroup(serverConfig.numBossThreads())
                : new NioEventLoopGroup(serverConfig.numBossThreads(), serverConfig.bossThreadFactory());
        workerGroup = (serverConfig.workerThreadFactory() == null)
                ? new NioEventLoopGroup(serverConfig.numWorkerThreads())
                : new NioEventLoopGroup(serverConfig.numWorkerThreads(), serverConfig.workerThreadFactory());
        channelClass = NioServerSocketChannel.class;
    }
    eventLoopGroups.add(bossGroup);
    eventLoopGroups.add(workerGroup);
    // Figure out which channel initializer should set up the channel pipelines for new channels.
    ChannelInitializer<SocketChannel> channelInitializer = serverConfig.customChannelInitializer();
    if (channelInitializer == null) {
        DistributedTracingConfig<Span> wingtipsDistributedTracingConfig = getOrGenerateWingtipsDistributedTracingConfig(serverConfig);
        // No custom channel initializer, so use the default
        channelInitializer = new HttpChannelInitializer(
                sslCtx,
                serverConfig.maxRequestSizeInBytes(),
                serverConfig.appEndpoints(),
                serverConfig.requestAndResponseFilters(),
                serverConfig.longRunningTaskExecutor(),
                serverConfig.riposteErrorHandler(),
                serverConfig.riposteUnhandledErrorHandler(),
                serverConfig.requestContentValidationService(),
                serverConfig.defaultRequestContentDeserializer(),
                new ResponseSender(
                        serverConfig.defaultResponseContentSerializer(),
                        serverConfig.errorResponseBodySerializer(),
                        wingtipsDistributedTracingConfig),
                serverConfig.metricsListener(),
                serverConfig.defaultCompletableFutureTimeoutInMillisForNonblockingEndpoints(),
                serverConfig.accessLogger(),
                serverConfig.pipelineCreateHooks(),
                serverConfig.requestSecurityValidator(),
                serverConfig.workerChannelIdleTimeoutMillis(),
                serverConfig.proxyRouterConnectTimeoutMillis(),
                serverConfig.incompleteHttpCallTimeoutMillis(),
                serverConfig.maxOpenIncomingServerChannels(),
                serverConfig.isDebugChannelLifecycleLoggingEnabled(),
                serverConfig.userIdHeaderKeys(),
                serverConfig.responseCompressionThresholdBytes(),
                serverConfig.httpRequestDecoderConfig(),
                wingtipsDistributedTracingConfig);
    }
    // Create the server bootstrap
    ServerBootstrap b = new ServerBootstrap();
    b.group(bossGroup, workerGroup).channel(channelClass).childHandler(channelInitializer);
    // execute pre startup hooks
    List<@NotNull PreServerStartupHook> preServerStartupHooks = serverConfig.preServerStartupHooks();
    if (preServerStartupHooks != null) {
        for (PreServerStartupHook hook : preServerStartupHooks) {
            hook.executePreServerStartupHook(b);
        }
    }
    if (serverConfig.isDebugChannelLifecycleLoggingEnabled()) {
        b.handler(new LoggingHandler(SERVER_BOSS_CHANNEL_DEBUG_LOGGER_NAME, LogLevel.DEBUG));
    }
    // Bind the server to the desired port and start it up so it is ready to receive requests
    Channel ch = b.bind(port).sync().channel();
    // execute post startup hooks
    List<@NotNull PostServerStartupHook> postServerStartupHooks = serverConfig.postServerStartupHooks();
    if (postServerStartupHooks != null) {
        for (PostServerStartupHook hook : postServerStartupHooks) {
            hook.executePostServerStartupHook(serverConfig, ch);
        }
    }
    channels.add(ch);
    logger.info("Server channel open and accepting " + (serverConfig.isEndpointsUseSsl() ? "https" : "http") + " requests on port " + port);
    startedUp = true;
    // Add a shutdown hook so we can gracefully stop the server when the JVM is going down
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
        try {
            shutdown();
        } catch (Exception e) {
            logger.warn("Error shutting down Riposte", e);
            throw new RuntimeException(e);
        }
    }));
}
Also used : EpollServerSocketChannel(io.netty.channel.epoll.EpollServerSocketChannel) SocketChannel(io.netty.channel.socket.SocketChannel) NioServerSocketChannel(io.netty.channel.socket.nio.NioServerSocketChannel) LoggingHandler(io.netty.handler.logging.LoggingHandler) ServerChannel(io.netty.channel.ServerChannel) Channel(io.netty.channel.Channel) Span(com.nike.wingtips.Span) ResponseSender(com.nike.riposte.server.http.ResponseSender) ServerBootstrap(io.netty.bootstrap.ServerBootstrap) IOException(java.io.IOException) CertificateException(java.security.cert.CertificateException) EpollEventLoopGroup(io.netty.channel.epoll.EpollEventLoopGroup) EventLoopGroup(io.netty.channel.EventLoopGroup) NioEventLoopGroup(io.netty.channel.nio.NioEventLoopGroup) HttpChannelInitializer(com.nike.riposte.server.channelpipeline.HttpChannelInitializer) PostServerStartupHook(com.nike.riposte.server.hooks.PostServerStartupHook) SslContext(io.netty.handler.ssl.SslContext) PreServerStartupHook(com.nike.riposte.server.hooks.PreServerStartupHook)
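For context, startup() is typically driven from an application entry point. A minimal sketch, where MyAppServerConfig is a placeholder for whatever com.nike.riposte.server.config.ServerConfig implementation the application provides:

public static void main(String[] args) throws Exception {
    // MyAppServerConfig is hypothetical; supply your own ServerConfig implementation.
    new Server(new MyAppServerConfig()).startup();
    // startup() registers its own JVM shutdown hook (see above), so no explicit
    // shutdown wiring is needed here.
}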

Aggregations

ServerChannel (io.netty.channel.ServerChannel): 14
Channel (io.netty.channel.Channel): 10
ServerBootstrap (io.netty.bootstrap.ServerBootstrap): 8
NioServerSocketChannel (io.netty.channel.socket.nio.NioServerSocketChannel): 6
ChannelFuture (io.netty.channel.ChannelFuture): 5
EventLoopGroup (io.netty.channel.EventLoopGroup): 5
InetSocketAddress (java.net.InetSocketAddress): 5
Bootstrap (io.netty.bootstrap.Bootstrap): 4
ByteBuf (io.netty.buffer.ByteBuf): 4
ChannelFutureListener (io.netty.channel.ChannelFutureListener): 4
LocalServerChannel (io.netty.channel.local.LocalServerChannel): 4
ChannelInitializer (io.netty.channel.ChannelInitializer): 3
LocalAddress (io.netty.channel.local.LocalAddress): 3
IOException (java.io.IOException): 3
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 3
ChannelHandlerContext (io.netty.channel.ChannelHandlerContext): 2
ChannelInboundHandlerAdapter (io.netty.channel.ChannelInboundHandlerAdapter): 2
NioEventLoopGroup (io.netty.channel.nio.NioEventLoopGroup): 2
MAX_PORT_NUMBER (io.scalecube.transport.Addressing.MAX_PORT_NUMBER): 2
MIN_PORT_NUMBER (io.scalecube.transport.Addressing.MIN_PORT_NUMBER): 2