
Example 96 with Server

Use of com.netflix.loadbalancer.Server in project ribbon by Netflix.

From class RestClientTest, method testExecuteWithLB.

@Test
public void testExecuteWithLB() throws Exception {
    ConfigurationManager.getConfigInstance().setProperty("allservices.ribbon." + CommonClientConfigKey.ReadTimeout, "10000");
    ConfigurationManager.getConfigInstance().setProperty("allservices.ribbon." + CommonClientConfigKey.FollowRedirects, "true");
    RestClient client = (RestClient) ClientFactory.getNamedClient("allservices");
    BaseLoadBalancer lb = new BaseLoadBalancer();
    Server[] servers = new Server[] { new Server("localhost", server.getServerPort()) };
    lb.addServers(Arrays.asList(servers));
    client.setLoadBalancer(lb);
    Set<URI> expected = new HashSet<URI>();
    expected.add(new URI(server.getServerPath("/")));
    Set<URI> result = new HashSet<URI>();
    HttpRequest request = HttpRequest.newBuilder().uri(new URI("/")).build();
    for (int i = 0; i < 5; i++) {
        HttpResponse response = client.executeWithLoadBalancer(request);
        assertStatusIsOk(response.getStatus());
        assertTrue(response.isSuccess());
        String content = response.getEntity(String.class);
        response.close();
        assertFalse(content.isEmpty());
        result.add(response.getRequestedURI());
    }
    assertEquals(expected, result);
    request = HttpRequest.newBuilder().uri(server.getServerURI()).build();
    HttpResponse response = client.executeWithLoadBalancer(request);
    assertEquals(200, response.getStatus());
}
Also used : HttpRequest(com.netflix.client.http.HttpRequest) Server(com.netflix.loadbalancer.Server) MockHttpServer(com.netflix.client.testutil.MockHttpServer) HttpResponse(com.netflix.client.http.HttpResponse) BaseLoadBalancer(com.netflix.loadbalancer.BaseLoadBalancer) URI(java.net.URI) HashSet(java.util.HashSet) Test(org.junit.Test)
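
For context, the round-robin behavior this test relies on can also be observed by querying a BaseLoadBalancer directly. The following is a minimal standalone sketch, not taken from the Ribbon sources; the class name, hosts, and ports are illustrative.

import com.netflix.loadbalancer.BaseLoadBalancer;
import com.netflix.loadbalancer.Server;
import java.util.Arrays;

public class ChooseServerSketch {
    public static void main(String[] args) {
        BaseLoadBalancer lb = new BaseLoadBalancer();
        // Two illustrative servers; BaseLoadBalancer's default rule is round-robin.
        lb.addServers(Arrays.asList(new Server("localhost", 8081), new Server("localhost", 8082)));
        for (int i = 0; i < 4; i++) {
            // The key argument is unused by the default RoundRobinRule.
            System.out.println("Chosen: " + lb.chooseServer(null));
        }
    }
}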

Example 97 with Server

Use of com.netflix.loadbalancer.Server in project ribbon by Netflix.

From class LoadBalancingExample, method main.

public static void main(String[] args) throws Exception {
    List<Server> servers = Lists.newArrayList(new Server("www.google.com:80"), new Server("www.examples.com:80"), new Server("www.wikipedia.org:80"));
    BaseLoadBalancer lb = LoadBalancerBuilder.newBuilder().buildFixedServerListLoadBalancer(servers);
    LoadBalancingHttpClient<ByteBuf, ByteBuf> client = RibbonTransport.newHttpClient(lb);
    final CountDownLatch latch = new CountDownLatch(servers.size());
    Observer<HttpClientResponse<ByteBuf>> observer = new Observer<HttpClientResponse<ByteBuf>>() {

        @Override
        public void onCompleted() {
        }

        @Override
        public void onError(Throwable e) {
            e.printStackTrace();
        }

        @Override
        public void onNext(HttpClientResponse<ByteBuf> args) {
            latch.countDown();
            System.out.println("Got response: " + args.getStatus());
        }
    };
    for (int i = 0; i < servers.size(); i++) {
        HttpClientRequest<ByteBuf> request = HttpClientRequest.createGet("/");
        client.submit(request).subscribe(observer);
    }
    latch.await();
    System.out.println(lb.getLoadBalancerStats());
}
Also used : Server(com.netflix.loadbalancer.Server) HttpClientResponse(io.reactivex.netty.protocol.http.client.HttpClientResponse) Observer(rx.Observer) BaseLoadBalancer(com.netflix.loadbalancer.BaseLoadBalancer) ByteBuf(io.netty.buffer.ByteBuf) CountDownLatch(java.util.concurrent.CountDownLatch)
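
As a variation on the example above, the builder also accepts an explicit IRule. The sketch below is illustrative only and assumes the same LoadBalancerBuilder API, swapping in an AvailabilityFilteringRule (one of the rules that appears elsewhere in these examples); the server list is trimmed for brevity.

import com.google.common.collect.Lists;
import com.netflix.loadbalancer.AvailabilityFilteringRule;
import com.netflix.loadbalancer.BaseLoadBalancer;
import com.netflix.loadbalancer.LoadBalancerBuilder;
import com.netflix.loadbalancer.Server;
import java.util.List;

public class RuleSketch {
    public static void main(String[] args) {
        List<Server> servers = Lists.newArrayList(
                new Server("www.google.com:80"),
                new Server("www.wikipedia.org:80"));
        // Same fixed server list idea as above, but with an explicit rule
        // instead of the default round-robin.
        BaseLoadBalancer lb = LoadBalancerBuilder.newBuilder()
                .withRule(new AvailabilityFilteringRule())
                .buildFixedServerListLoadBalancer(servers);
        System.out.println(lb.chooseServer(null));
    }
}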

Example 98 with Server

Use of com.netflix.loadbalancer.Server in project ribbon by Netflix.

From class PrimeConnections, method primeConnections.

/**
 * Prime connections, blocking until the configured percentage (default is 100%) of target servers are primed
 * or the maximum time is reached.
 *
 * @see CommonClientConfigKey#MinPrimeConnectionsRatio
 * @see CommonClientConfigKey#MaxTotalTimeToPrimeConnections
 */
public void primeConnections(List<Server> servers) {
    if (servers == null || servers.size() == 0) {
        logger.debug("No server to prime");
        return;
    }
    for (Server server : servers) {
        server.setReadyToServe(false);
    }
    int totalCount = (int) (servers.size() * primeRatio);
    final CountDownLatch latch = new CountDownLatch(totalCount);
    final AtomicInteger successCount = new AtomicInteger(0);
    final AtomicInteger failureCount = new AtomicInteger(0);
    primeConnectionsAsync(servers, new PrimeConnectionListener() {

        @Override
        public void primeCompleted(Server s, Throwable lastException) {
            if (lastException == null) {
                successCount.incrementAndGet();
                s.setReadyToServe(true);
            } else {
                failureCount.incrementAndGet();
            }
            latch.countDown();
        }
    });
    Stopwatch stopWatch = initialPrimeTimer.start();
    try {
        latch.await(maxTotalTimeToPrimeConnections, TimeUnit.MILLISECONDS);
    } catch (InterruptedException e) {
        logger.error("Priming connection interrupted", e);
    } finally {
        stopWatch.stop();
    }
    stats = new PrimeConnectionEndStats(totalCount, successCount.get(), failureCount.get(), stopWatch.getDuration(TimeUnit.MILLISECONDS));
    printStats(stats);
}
Also used : Server(com.netflix.loadbalancer.Server) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) Stopwatch(com.netflix.servo.monitor.Stopwatch) CountDownLatch(java.util.concurrent.CountDownLatch)
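
The javadoc above refers to the MinPrimeConnectionsRatio and MaxTotalTimeToPrimeConnections keys. Below is a hedged sketch of setting them through Archaius, following the same "<client>.ribbon.<key>" property pattern used in Example 96; the client name "myclient" and the values are illustrative, not taken from the Ribbon sources.

import com.netflix.client.config.CommonClientConfigKey;
import com.netflix.config.ConfigurationManager;

public class PrimeConnectionsConfigSketch {
    public static void main(String[] args) {
        // Turn on connection priming for the (hypothetical) named client.
        ConfigurationManager.getConfigInstance()
                .setProperty("myclient.ribbon." + CommonClientConfigKey.EnablePrimeConnections, "true");
        // Block until 90% of servers are primed...
        ConfigurationManager.getConfigInstance()
                .setProperty("myclient.ribbon." + CommonClientConfigKey.MinPrimeConnectionsRatio, "0.9");
        // ...or until 30 seconds (in milliseconds) have elapsed, whichever comes first.
        ConfigurationManager.getConfigInstance()
                .setProperty("myclient.ribbon." + CommonClientConfigKey.MaxTotalTimeToPrimeConnections, "30000");
    }
}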

Example 99 with Server

Use of com.netflix.loadbalancer.Server in project ribbon by Netflix.

From class LoadBalancerCommand, method submit.

/**
 * Create an {@link Observable} that, once subscribed, executes the network call asynchronously against a server chosen by the load balancer.
 * Any errors that are indicated as retriable by the {@link RetryHandler} are consumed internally by the
 * function and will not be observed by the {@link Observer} subscribed to the returned {@link Observable}. If the number of retries
 * exceeds the maximum allowed, a final error will be emitted by the returned {@link Observable}. Otherwise, the first successful
 * result during execution and retries will be emitted.
 */
public Observable<T> submit(final ServerOperation<T> operation) {
    final ExecutionInfoContext context = new ExecutionInfoContext();
    if (listenerInvoker != null) {
        try {
            listenerInvoker.onExecutionStart();
        } catch (AbortExecutionException e) {
            return Observable.error(e);
        }
    }
    final int maxRetrysSame = retryHandler.getMaxRetriesOnSameServer();
    final int maxRetrysNext = retryHandler.getMaxRetriesOnNextServer();
    // Use the load balancer
    Observable<T> o = (server == null ? selectServer() : Observable.just(server)).concatMap(new Func1<Server, Observable<T>>() {

        @Override
        // Called for each server being selected
        public Observable<T> call(Server server) {
            context.setServer(server);
            final ServerStats stats = loadBalancerContext.getServerStats(server);
            // Called for each attempt and retry
            Observable<T> o = Observable.just(server).concatMap(new Func1<Server, Observable<T>>() {

                @Override
                public Observable<T> call(final Server server) {
                    context.incAttemptCount();
                    loadBalancerContext.noteOpenConnection(stats);
                    if (listenerInvoker != null) {
                        try {
                            listenerInvoker.onStartWithServer(context.toExecutionInfo());
                        } catch (AbortExecutionException e) {
                            return Observable.error(e);
                        }
                    }
                    final Stopwatch tracer = loadBalancerContext.getExecuteTracer().start();
                    return operation.call(server).doOnEach(new Observer<T>() {

                        private T entity;

                        @Override
                        public void onCompleted() {
                            recordStats(tracer, stats, entity, null);
                        // TODO: What to do if onNext or onError are never called?
                        }

                        @Override
                        public void onError(Throwable e) {
                            recordStats(tracer, stats, null, e);
                            logger.debug("Got error {} when executed on server {}", e, server);
                            if (listenerInvoker != null) {
                                listenerInvoker.onExceptionWithServer(e, context.toExecutionInfo());
                            }
                        }

                        @Override
                        public void onNext(T entity) {
                            this.entity = entity;
                            if (listenerInvoker != null) {
                                listenerInvoker.onExecutionSuccess(entity, context.toExecutionInfo());
                            }
                        }

                        private void recordStats(Stopwatch tracer, ServerStats stats, Object entity, Throwable exception) {
                            tracer.stop();
                            loadBalancerContext.noteRequestCompletion(stats, entity, exception, tracer.getDuration(TimeUnit.MILLISECONDS), retryHandler);
                        }
                    });
                }
            });
            if (maxRetrysSame > 0)
                o = o.retry(retryPolicy(maxRetrysSame, true));
            return o;
        }
    });
    if (maxRetrysNext > 0 && server == null)
        o = o.retry(retryPolicy(maxRetrysNext, false));
    return o.onErrorResumeNext(new Func1<Throwable, Observable<T>>() {

        @Override
        public Observable<T> call(Throwable e) {
            if (context.getAttemptCount() > 0) {
                if (maxRetrysNext > 0 && context.getServerAttemptCount() == (maxRetrysNext + 1)) {
                    e = new ClientException(ClientException.ErrorType.NUMBEROF_RETRIES_NEXTSERVER_EXCEEDED, "Number of retries on next server exceeded max " + maxRetrysNext + " retries, while making a call for: " + context.getServer(), e);
                } else if (maxRetrysSame > 0 && context.getAttemptCount() == (maxRetrysSame + 1)) {
                    e = new ClientException(ClientException.ErrorType.NUMBEROF_RETRIES_EXEEDED, "Number of retries exceeded max " + maxRetrysSame + " retries, while making a call for: " + context.getServer(), e);
                }
            }
            if (listenerInvoker != null) {
                listenerInvoker.onExecutionFailed(e, context.toFinalExecutionInfo());
            }
            return Observable.error(e);
        }
    });
}
Also used : Server(com.netflix.loadbalancer.Server) Stopwatch(com.netflix.servo.monitor.Stopwatch) AbortExecutionException(com.netflix.loadbalancer.reactive.ExecutionListener.AbortExecutionException) Observable(rx.Observable) ServerStats(com.netflix.loadbalancer.ServerStats) ClientException(com.netflix.client.ClientException) Func1(rx.functions.Func1)
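
For orientation, a minimal way to exercise submit() from the caller's side might look like the sketch below, assuming the LoadBalancerCommand builder API; the server list and the echo-style ServerOperation are purely illustrative, not a real network call.

import com.netflix.loadbalancer.BaseLoadBalancer;
import com.netflix.loadbalancer.Server;
import com.netflix.loadbalancer.reactive.LoadBalancerCommand;
import com.netflix.loadbalancer.reactive.ServerOperation;
import rx.Observable;
import java.util.Arrays;

public class SubmitSketch {
    public static void main(String[] args) {
        BaseLoadBalancer lb = new BaseLoadBalancer();
        lb.addServers(Arrays.asList(new Server("host1", 80), new Server("host2", 80)));
        String result = LoadBalancerCommand.<String>builder()
                .withLoadBalancer(lb)
                .build()
                .submit(new ServerOperation<String>() {
                    @Override
                    public Observable<String> call(Server server) {
                        // A real operation would issue the network call here; retriable
                        // failures would be retried as described in the javadoc above.
                        return Observable.just("executed on " + server);
                    }
                })
                .toBlocking()
                .first();
        System.out.println(result);
    }
}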

Example 100 with Server

Use of com.netflix.loadbalancer.Server in project ribbon by Netflix.

From class ServerListLoabBalancerTest, method init.

@BeforeClass
public static void init() {
    Configuration config = ConfigurationManager.getConfigInstance();
    config.setProperty("ServerListLoabBalancerTest.ribbon.NFLoadBalancerClassName", com.netflix.loadbalancer.DynamicServerListLoadBalancer.class.getName());
    config.setProperty("ServerListLoabBalancerTest.ribbon.NIWSServerListClassName", FixedServerList.class.getName());
    lb = (DynamicServerListLoadBalancer<Server>) ClientFactory.getNamedLoadBalancer("ServerListLoabBalancerTest");
}
Also used : Configuration(org.apache.commons.configuration.Configuration) Server(com.netflix.loadbalancer.Server) DynamicServerListLoadBalancer(com.netflix.loadbalancer.DynamicServerListLoadBalancer) BeforeClass(org.junit.BeforeClass)
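
FixedServerList here is a small test helper from the Ribbon test sources and is not shown on this page. Purely as a hypothetical illustration of the contract such a class must satisfy to be usable as NIWSServerListClassName, a minimal ServerList implementation could look like the following; the class name, hosts, and ports are invented.

import com.netflix.client.config.IClientConfig;
import com.netflix.loadbalancer.AbstractServerList;
import com.netflix.loadbalancer.Server;
import java.util.Arrays;
import java.util.List;

public class MyFixedServerList extends AbstractServerList<Server> {

    private final List<Server> servers =
            Arrays.asList(new Server("localhost", 8081), new Server("localhost", 8082));

    @Override
    public void initWithNiwsConfig(IClientConfig clientConfig) {
        // Nothing to configure for a hard-coded list.
    }

    @Override
    public List<Server> getInitialListOfServers() {
        return servers;
    }

    @Override
    public List<Server> getUpdatedListOfServers() {
        // A real implementation could refresh from a registry; this one is static.
        return servers;
    }
}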

Aggregations

Server (com.netflix.loadbalancer.Server): 134
Test (org.junit.Test): 98
ArrayList (java.util.ArrayList): 40
BaseLoadBalancer (com.netflix.loadbalancer.BaseLoadBalancer): 26
ByteBuf (io.netty.buffer.ByteBuf): 26
MockWebServer (com.google.mockwebserver.MockWebServer): 25
IClientConfig (com.netflix.client.config.IClientConfig): 23
AvailabilityFilteringRule (com.netflix.loadbalancer.AvailabilityFilteringRule): 20
DummyPing (com.netflix.loadbalancer.DummyPing): 18
HttpServer (com.sun.net.httpserver.HttpServer): 18
URI (java.net.URI): 15
Invocation (org.apache.servicecomb.core.Invocation): 14
DynamicServerListLoadBalancer (com.netflix.loadbalancer.DynamicServerListLoadBalancer): 12
ServerStats (com.netflix.loadbalancer.ServerStats): 12
Person (com.netflix.ribbon.test.resources.EmbeddedResources.Person): 12
MockUp (mockit.MockUp): 12
ClientException (com.netflix.client.ClientException): 11
DefaultClientConfigImpl (com.netflix.client.config.DefaultClientConfigImpl): 11
ExecutionListener (com.netflix.loadbalancer.reactive.ExecutionListener): 9
HttpClientResponse (io.reactivex.netty.protocol.http.client.HttpClientResponse): 9