
Example 1 with ClockedExecutor

Use of com.linkedin.test.util.ClockedExecutor in project rest.li by LinkedIn.

From the class TestDarkClusterStrategyFactory, method setup.

@BeforeMethod
public void setup() {
    _clusterInfoProvider = new MockClusterInfoProvider();
    Facilities facilities = new MockFacilities(_clusterInfoProvider);
    DarkClusterConfig darkClusterConfigOld = createRelativeTrafficMultiplierConfig(0.5f);
    _clusterInfoProvider.addDarkClusterConfig(SOURCE_CLUSTER_NAME, PREEXISTING_DARK_CLUSTER_NAME, darkClusterConfigOld);
    DarkClusterDispatcher darkClusterDispatcher = new DefaultDarkClusterDispatcher(new MockClient(false));
    ClockedExecutor executor = new ClockedExecutor();
    _rateLimiterSupplier = () -> new ConstantQpsRateLimiter(executor, executor, executor, TestConstantQpsDarkClusterStrategy.getBuffer(executor));
    _strategyFactory = new DarkClusterStrategyFactoryImpl(facilities, SOURCE_CLUSTER_NAME, darkClusterDispatcher, new DoNothingNotifier(), new Random(SEED), new CountingVerifierManager(), _rateLimiterSupplier);
    _strategyFactory.start();
}
Also used : DefaultDarkClusterDispatcher(com.linkedin.darkcluster.impl.DefaultDarkClusterDispatcher) DarkClusterDispatcher(com.linkedin.darkcluster.api.DarkClusterDispatcher) DefaultDarkClusterDispatcher(com.linkedin.darkcluster.impl.DefaultDarkClusterDispatcher) ClockedExecutor(com.linkedin.test.util.ClockedExecutor) DarkClusterStrategyFactoryImpl(com.linkedin.darkcluster.impl.DarkClusterStrategyFactoryImpl) Facilities(com.linkedin.d2.balancer.Facilities) ConstantQpsRateLimiter(com.linkedin.r2.transport.http.client.ConstantQpsRateLimiter) Random(java.util.Random) DarkClusterConfig(com.linkedin.d2.DarkClusterConfig) BeforeMethod(org.testng.annotations.BeforeMethod)
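
None of the examples on this page shows ClockedExecutor on its own, so the following minimal sketch illustrates the pattern they all rely on. It is hypothetical (not code from rest.li) and assumes only what the tests below already exercise: ClockedExecutor serves as both the ScheduledExecutorService and the clock handed to the component under test, runFor(millis) advances virtual time and executes whatever falls due in that window, and getCurrentTimeMillis() reflects the advanced time. Assumed imports: java.util.concurrent.TimeUnit, java.util.concurrent.atomic.AtomicBoolean, org.testng.Assert.

@Test
public void sketchVirtualTimeAdvancesDeterministically() {
    // hypothetical sketch, not from rest.li
    ClockedExecutor executor = new ClockedExecutor();
    AtomicBoolean fired = new AtomicBoolean(false);
    // schedule work 100 virtual milliseconds in the future via the ScheduledExecutorService API
    executor.schedule(() -> fired.set(true), 100, TimeUnit.MILLISECONDS);
    // nothing runs until virtual time is advanced far enough
    executor.runFor(50);
    Assert.assertFalse(fired.get());
    // advancing past the scheduled delay executes the task with no real waiting
    executor.runFor(100);
    Assert.assertTrue(fired.get());
    // assuming runFor advances the clock by exactly the requested amount
    Assert.assertEquals(executor.getCurrentTimeMillis(), 150L);
}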

Example 2 with ClockedExecutor

Use of com.linkedin.test.util.ClockedExecutor in project rest.li by LinkedIn.

From the class TestAsyncPool, method testCreationTimeout.

@Test(dataProvider = "creationTimeoutDataProvider")
public void testCreationTimeout(int poolSize, int concurrency) throws Exception {
    // this object creation lifecycle simulates the creation limbo state: the creator never completes a creation
    ObjectCreatorThatNeverCreates objectCreatorThatNeverCreates = new ObjectCreatorThatNeverCreates();
    ClockedExecutor clockedExecutor = new ClockedExecutor();
    ExponentialBackOffRateLimiter rateLimiter = new ExponentialBackOffRateLimiter(0, 5000, 10, clockedExecutor, concurrency);
    final AsyncPool<Object> pool = new AsyncPoolImpl<>("object pool", objectCreatorThatNeverCreates, poolSize, Integer.MAX_VALUE, Integer.MAX_VALUE, clockedExecutor, Integer.MAX_VALUE, AsyncPoolImpl.Strategy.MRU, 0, rateLimiter, clockedExecutor, new LongTracking());
    pool.start();
    List<FutureCallback<Object>> checkoutCallbacks = new ArrayList<>();
    // Let's try to check out more than the max pool size while the object creator is in the limbo state
    for (int i = 0; i < poolSize * 2; i++) {
        FutureCallback<Object> cb = new FutureCallback<>();
        checkoutCallbacks.add(cb);
        // Reset the exponential back off due to creation timeout error
        rateLimiter.setPeriod(0);
        pool.get(cb);
        // run for the duration of default creation timeout
        // TODO: parameterize the creation duration when the default creation gets parameterized
        clockedExecutor.runFor(AsyncPoolImpl.DEFAULT_OBJECT_CREATION_TIMEOUT);
    }
    // drain all the pending tasks
    clockedExecutor.runFor(AsyncPoolImpl.DEFAULT_OBJECT_CREATION_TIMEOUT);
    // since the object creator is stuck in limbo, every checkout should fail with ObjectCreationTimeoutException
    for (FutureCallback<Object> cb : checkoutCallbacks) {
        try {
            cb.get(100, TimeUnit.MILLISECONDS);
        } catch (Exception ex) {
            Assert.assertTrue(ex.getCause() instanceof ObjectCreationTimeoutException);
        }
    }
    // Let's make sure the channel pool stats are in the expected state
    PoolStats stats = pool.getStats();
    // Let's make sure all the limbo creations timed out as expected
    Assert.assertEquals(stats.getTotalCreateErrors(), poolSize * 2);
    // No checkout should have happened since the object creator is in limbo
    Assert.assertEquals(stats.getCheckedOut(), 0);
    // No idle objects in the pool
    Assert.assertEquals(stats.getIdleCount(), 0);
    // Let's make sure all the slots in the pool are reclaimed even though object creation is in limbo
    Assert.assertEquals(stats.getPoolSize(), 0);
    // Since the pending creation requests reached the max pool size,
    // we should have hit the max pool size at least once
    Assert.assertEquals(stats.getMaxPoolSize(), poolSize);
    // Since no object was successfully created, the idle count should remain zero
    Assert.assertEquals(stats.getIdleCount(), 0);
}
Also used : LongTracking(com.linkedin.common.stats.LongTracking) ArrayList(java.util.ArrayList) ClockedExecutor(com.linkedin.test.util.ClockedExecutor) ObjectCreationTimeoutException(com.linkedin.r2.transport.http.client.ObjectCreationTimeoutException) TimeoutException(java.util.concurrent.TimeoutException) ExecutionException(java.util.concurrent.ExecutionException) PoolStats(com.linkedin.r2.transport.http.client.PoolStats) AsyncPoolImpl(com.linkedin.r2.transport.http.client.AsyncPoolImpl) ExponentialBackOffRateLimiter(com.linkedin.r2.transport.http.client.ExponentialBackOffRateLimiter) FutureCallback(com.linkedin.common.callback.FutureCallback) ObjectCreationTimeoutException(com.linkedin.r2.transport.http.client.ObjectCreationTimeoutException) Test(org.testng.annotations.Test)

Example 3 with ClockedExecutor

Use of com.linkedin.test.util.ClockedExecutor in project rest.li by LinkedIn.

From the class TestAsyncPool, method testWaiterTimeout.

/**
 * This test case verifies that the correct number of waiters are timed out while waiting for an object from the pool.
 *
 *     Assumption: the channel pool max size is always bigger than the requested checkout size
 *
 *|----------A------------|---------------B---------------|---------------C--------------|-------------D--------------
 *   A = In Phase A, N object checkout requests are sent to the pool while no tasks are pending in the rate
 *       limiter. Expected result: the channel pool creates N new objects and checks them out.
 *   B = In Phase B, O more object checkout requests are sent to the channel pool after it has already checked
 *       out N objects. In this phase, object creation inside the pool is blocked, so the rate limiter
 *       queues the creation requests once it reaches its configured maximum concurrency.
 *   C = In Phase C, P of the objects created in Phase A are returned to the pool, which brings the
 *       waiter queue size down to O-P.
 *   D = In Phase D, a delay is introduced to time out the waiters, and all O-P remaining waiters should be timed out.
 *       After the delay, object creation is unblocked and the pool should create at least as many objects
 *       as the rate limiter concurrency even though the waiters have timed out.
 *
 * @param numberOfCheckoutsInPhaseA the N number of checkout operations that will be performed in Phase A
 * @param numberOfCheckoutsInPhaseB the O number of checkout operations that will be performed in Phase B
 * @param numbOfObjectsToBeReturnedInPhaseC the number of objects returned in Phase C
 * @param poolSize size of the pool
 * @param concurrency concurrency of the rate limiter
 * @param waiterTimeout time in milliseconds a waiter may wait for an object before it is timed out
 */
@Test(dataProvider = "waiterTimeoutDataProvider")
public void testWaiterTimeout(int numberOfCheckoutsInPhaseA, int numberOfCheckoutsInPhaseB, int numbOfObjectsToBeReturnedInPhaseC, int poolSize, int concurrency, int waiterTimeout) throws Exception {
    CreationBlockableSynchronousLifecycle blockableObjectCreator = new CreationBlockableSynchronousLifecycle(numberOfCheckoutsInPhaseB, concurrency);
    ScheduledExecutorService executor = Executors.newScheduledThreadPool(500);
    ExponentialBackOffRateLimiter rateLimiter = new ExponentialBackOffRateLimiter(0, 5000, 10, executor, concurrency);
    ClockedExecutor clockedExecutor = new ClockedExecutor();
    final AsyncPool<Object> pool = new AsyncPoolImpl<>("object pool", blockableObjectCreator, poolSize, Integer.MAX_VALUE, waiterTimeout, clockedExecutor, Integer.MAX_VALUE, AsyncPoolImpl.Strategy.MRU, 0, rateLimiter, clockedExecutor, new LongTracking());
    pool.start();
    // Phase A: checking out objects 'numberOfCheckoutsInPhaseA' times
    List<Object> checkedOutObjects = performCheckout(numberOfCheckoutsInPhaseA, pool);
    // Phase B : Blocking object creation and performing the checkout 'numberOfCheckoutsInPhaseB' times again
    blockableObjectCreator.blockCreation();
    Future<None> future = performUnblockingCheckout(numberOfCheckoutsInPhaseB, 0, pool);
    blockableObjectCreator.waitUntilAllBlocked();
    // Phase C : Returning the checkedOut objects from Phase A back to the object pool
    for (int i = 0; i < numbOfObjectsToBeReturnedInPhaseC; i++) {
        pool.put(checkedOutObjects.remove(0));
    }
    clockedExecutor.runFor(waiterTimeout);
    // Phase D : All the object creation in phase B gets unblocked now
    blockableObjectCreator.unblockCreation();
    try {
        future.get(5, TimeUnit.SECONDS);
    } catch (Exception e) {
        Assert.fail("Did not complete unblocked object creations on time, Unexpected interruption", e);
    }
    // Making sure the rate limiter pending tasks are submitted to the executor
    AssertionMethods.assertWithTimeout(5000, () -> Assert.assertEquals(rateLimiter.numberOfPendingTasks(), 0, "Number of tasks has to drop to 0"));
    executor.shutdown();
    try {
        if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
            Assert.fail("Executor took too long to shutdown");
        }
    } catch (Exception ex) {
        Assert.fail("Unexpected interruption while shutting down executor", ex);
    }
    PoolStats stats = pool.getStats();
    Assert.assertEquals(stats.getTotalCreationIgnored(), numberOfCheckoutsInPhaseB - concurrency);
    Assert.assertEquals(stats.getCheckedOut(), numberOfCheckoutsInPhaseA);
    Assert.assertEquals(stats.getIdleCount(), concurrency);
    Assert.assertEquals(stats.getTotalCreated(), numberOfCheckoutsInPhaseA + concurrency);
    Assert.assertEquals(stats.getPoolSize(), numberOfCheckoutsInPhaseA + concurrency);
    Assert.assertEquals(stats.getTotalWaiterTimedOut(), numberOfCheckoutsInPhaseB - numbOfObjectsToBeReturnedInPhaseC);
}
Also used : LongTracking(com.linkedin.common.stats.LongTracking) ScheduledExecutorService(java.util.concurrent.ScheduledExecutorService) ClockedExecutor(com.linkedin.test.util.ClockedExecutor) ObjectCreationTimeoutException(com.linkedin.r2.transport.http.client.ObjectCreationTimeoutException) TimeoutException(java.util.concurrent.TimeoutException) ExecutionException(java.util.concurrent.ExecutionException) PoolStats(com.linkedin.r2.transport.http.client.PoolStats) AsyncPoolImpl(com.linkedin.r2.transport.http.client.AsyncPoolImpl) ExponentialBackOffRateLimiter(com.linkedin.r2.transport.http.client.ExponentialBackOffRateLimiter) None(com.linkedin.common.util.None) Test(org.testng.annotations.Test)
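
The final block of assertions in testWaiterTimeout encodes the phase arithmetic described in the Javadoc. A worked instance may help; the inputs below are hypothetical, chosen only for illustration, and are not values from waiterTimeoutDataProvider.

// Hypothetical inputs, chosen only to illustrate the arithmetic of the assertions above.
int numberOfCheckoutsInPhaseA = 10;         // N: objects created and checked out in Phase A
int numberOfCheckoutsInPhaseB = 20;         // O: checkouts issued while creation is blocked in Phase B
int numbOfObjectsToBeReturnedInPhaseC = 5;  // P: Phase A objects returned to the pool in Phase C
int concurrency = 3;                        // creations the rate limiter allows to run concurrently

int creationsIgnored = numberOfCheckoutsInPhaseB - concurrency;       // 17: queued creations dropped once their waiters are gone
int checkedOut = numberOfCheckoutsInPhaseA;                           // 10: 5 still held from Phase A plus 5 handed to waiters in Phase C
int idleCount = concurrency;                                          // 3: the in-flight creations finish after unblocking and sit idle
int totalCreated = numberOfCheckoutsInPhaseA + concurrency;           // 13: Phase A objects plus the unblocked creations
int poolSize = numberOfCheckoutsInPhaseA + concurrency;               // 13: the same objects, all owned by the pool
int waitersTimedOut = numberOfCheckoutsInPhaseB - numbOfObjectsToBeReturnedInPhaseC;  // 15: 20 waiters minus the 5 satisfied by Phase C returns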

Example 4 with ClockedExecutor

Use of com.linkedin.test.util.ClockedExecutor in project rest.li by LinkedIn.

From the class BaseTestSmoothRateLimiter, method testSubmitExceedsPermits.

@Test(timeOut = TEST_TIMEOUT)
public void testSubmitExceedsPermits() throws Exception {
    ClockedExecutor clockedExecutor = new ClockedExecutor();
    AsyncRateLimiter rateLimiter = getRateLimiter(clockedExecutor, clockedExecutor, clockedExecutor);
    rateLimiter.setRate(ONE_PERMIT_PER_PERIOD, ONE_MILLISECOND_PERIOD, UNLIMITED_BURST);
    List<FutureCallback<None>> callbacks = new ArrayList<>();
    IntStream.range(0, 5).forEach(i -> {
        FutureCallback<None> callback = new FutureCallback<>();
        rateLimiter.submit(callback);
        callbacks.add(callback);
    });
    Assert.assertEquals(rateLimiter.getPendingTasksCount(), 5);
    // trigger the executor to run any tasks due at the current time
    clockedExecutor.runFor(0);
    // We have one permit to begin with, so the first task should run immediately, leaving 4 pending
    callbacks.get(0).get();
    Assert.assertEquals(rateLimiter.getPendingTasksCount(), 4);
    IntStream.range(0, 1).forEach(i -> assertTrue(callbacks.get(i).isDone()));
    IntStream.range(1, 5).forEach(i -> assertFalse(callbacks.get(i).isDone()));
    // We increment the clock by one period and one more permit should have been issued
    clockedExecutor.runFor(ONE_MILLISECOND_PERIOD);
    callbacks.get(1).get();
    Assert.assertEquals(rateLimiter.getPendingTasksCount(), 3);
    IntStream.range(0, 2).forEach(i -> assertTrue(callbacks.get(i).isDone()));
    IntStream.range(2, 5).forEach(i -> assertFalse(callbacks.get(i).isDone()));
    clockedExecutor.runFor(ONE_MILLISECOND_PERIOD);
    callbacks.get(2).get();
    Assert.assertEquals(rateLimiter.getPendingTasksCount(), 2);
    IntStream.range(0, 3).forEach(i -> assertTrue(callbacks.get(i).isDone()));
    IntStream.range(3, 5).forEach(i -> assertFalse(callbacks.get(i).isDone()));
    clockedExecutor.runFor(ONE_MILLISECOND_PERIOD);
    callbacks.get(3).get();
    Assert.assertEquals(rateLimiter.getPendingTasksCount(), 1);
    IntStream.range(0, 4).forEach(i -> assertTrue(callbacks.get(i).isDone()));
    IntStream.range(4, 5).forEach(i -> assertFalse(callbacks.get(i).isDone()));
    clockedExecutor.runFor(ONE_MILLISECOND_PERIOD);
    callbacks.get(4).get();
    Assert.assertEquals(rateLimiter.getPendingTasksCount(), 0);
    IntStream.range(0, 5).forEach(i -> assertTrue(callbacks.get(i).isDone()));
}
Also used : AsyncRateLimiter(com.linkedin.r2.transport.http.client.AsyncRateLimiter) ArrayList(java.util.ArrayList) ClockedExecutor(com.linkedin.test.util.ClockedExecutor) None(com.linkedin.common.util.None) FutureCallback(com.linkedin.common.callback.FutureCallback) Test(org.testng.annotations.Test)
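
The five assertion blocks above repeat one step per period. A compact way to express that loop is sketched below as a hypothetical helper (it is not part of rest.li). It assumes the initial permit has already been consumed, as it is by the clockedExecutor.runFor(0) call earlier in the test, so each additional period releases exactly one more callback. Assumed imports: java.util.List and org.testng.Assert in addition to those already listed for this example.

// Hypothetical helper, not part of rest.li. Precondition: the initial permit was already
// consumed (e.g. via clockedExecutor.runFor(0)), so each additional period releases one callback.
private static void drainOnePermitPerPeriod(ClockedExecutor clockedExecutor, AsyncRateLimiter rateLimiter,
    List<FutureCallback<None>> pending, long periodMillis) throws Exception {
    for (int i = 0; i < pending.size(); i++) {
        // advance virtual time by one period; the limiter should release exactly one more callback
        clockedExecutor.runFor(periodMillis);
        pending.get(i).get();
        Assert.assertEquals(rateLimiter.getPendingTasksCount(), pending.size() - i - 1);
    }
}

In the test above, this would replace the last four blocks with a single call: drainOnePermitPerPeriod(clockedExecutor, rateLimiter, callbacks.subList(1, 5), ONE_MILLISECOND_PERIOD).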

Example 5 with ClockedExecutor

Use of com.linkedin.test.util.ClockedExecutor in project rest.li by LinkedIn.

From the class TestRampUpRateLimiter, method testRampUp.

@Test(dataProvider = "targetRamp", timeOut = TEST_TIMEOUT * 1000)
public void testRampUp(int targetPermitsPerPeriod, float rampUp) {
    boolean useRampUpMethod = false;
    for (int k = 0; k < 2; k++, useRampUpMethod = true) {
        _queue.clear();
        ClockedExecutor clockedExecutor = new ClockedExecutor();
        RampUpRateLimiter rateLimiter = new RampUpRateLimiterImpl(new SmoothRateLimiter(clockedExecutor, clockedExecutor, clockedExecutor, _queue, Integer.MAX_VALUE, SmoothRateLimiter.BufferOverflowMode.DROP, RATE_LIMITER_NAME_TEST), clockedExecutor);
        rateLimiter.setRate(0, 1, MINIMUM_BURST, rampUp);
        rateLimiter.setRate(targetPermitsPerPeriod, ONE_SECOND_PERIOD, MINIMUM_BURST, rampUp);
        if (useRampUpMethod) {
            // issue close to 0 permits to have a successful ramp up afterwards
            rateLimiter.setRate(0, 1, MINIMUM_BURST, rampUp);
            rateLimiter.setRate(targetPermitsPerPeriod, ONE_SECOND_PERIOD, MINIMUM_BURST, rampUp);
        }
        AtomicInteger time = new AtomicInteger(0);
        AtomicInteger count = new AtomicInteger(0);
        List<Integer> completionsPerSecond = new ArrayList<>();
        int secondsToReachTargetState = (int) Math.ceil(targetPermitsPerPeriod / rampUp);
        IntStream.range(0, (int) (rampUp * secondsToReachTargetState * (secondsToReachTargetState + 1))).forEach(i -> {
            rateLimiter.submit(new Callback<None>() {

                @Override
                public void onError(Throwable e) {
                    throw new RuntimeException(e);
                }

                @Override
                public void onSuccess(None result) {
                    // counting how many tasks per second we are receiving.
                    if (clockedExecutor.getCurrentTimeMillis() - time.get() >= ONE_SECOND_PERIOD) {
                        time.set(((int) (clockedExecutor.getCurrentTimeMillis() / 1000) * 1000));
                        completionsPerSecond.add(count.get());
                        count.set(1);
                    } else {
                        count.incrementAndGet();
                    }
                }
            });
        });
        // run the clock only for the exact amount of time that is necessary to reach the stable state
        clockedExecutor.runFor((long) ((secondsToReachTargetState + 2) * 1000));
        long countAboveMaxTarget = 0;
        long countAtTarget = 0;
        long countBelowTarget = 0;
        for (Integer i : completionsPerSecond) {
            if (i > targetPermitsPerPeriod)
                countAboveMaxTarget++;
            if (i == targetPermitsPerPeriod)
                countAtTarget++;
            if (i < targetPermitsPerPeriod)
                countBelowTarget++;
        }
        assertEquals(countAboveMaxTarget, 0, "It should never go above the target QPS");
        assertTrue(countAtTarget > 0, "There should be at least one at the target QPS since it should reach the stable state after a while");
        // account for the first seconds in which no task will return when rampUp < 1
        long actualStepsToTarget = (countBelowTarget + 1) + (rampUp < 1 ? (long) (1 / rampUp) - 1 : 0);
        // using countBelowTarget + 1, because the step from the last observed value up to the target is never counted
        assertTrue(actualStepsToTarget >= secondsToReachTargetState * 0.9 && actualStepsToTarget <= Math.ceil(secondsToReachTargetState * 1.1), "There should be at least " + secondsToReachTargetState * 0.9 + " steps to get to the target and no more than " + Math.ceil(secondsToReachTargetState * 1.1) + ". Found: " + actualStepsToTarget + ".");
    }
}
Also used : ArrayList(java.util.ArrayList) ClockedExecutor(com.linkedin.test.util.ClockedExecutor) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) SmoothRateLimiter(com.linkedin.r2.transport.http.client.SmoothRateLimiter) None(com.linkedin.common.util.None) Test(org.testng.annotations.Test)
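
The sizing in testRampUp is easier to follow with concrete numbers. The values below are illustrative only, not taken from the targetRamp data provider; they simply trace the formulas already present in the test.

// Illustrative values only, not from the targetRamp data provider.
int targetPermitsPerPeriod = 10;
float rampUp = 2.5f;  // permits-per-period added each second during the ramp

// seconds needed before the limiter reaches the target rate: ceil(10 / 2.5) = 4
int secondsToReachTargetState = (int) Math.ceil(targetPermitsPerPeriod / rampUp);

// the test submits enough tasks to keep the limiter saturated through the whole ramp:
// rampUp * s * (s + 1) = 2.5 * 4 * 5 = 50 tasks
int tasksSubmitted = (int) (rampUp * secondsToReachTargetState * (secondsToReachTargetState + 1));

// the clock then runs (s + 2) seconds of virtual time, 6000 ms here, so both the last
// ramp step and the stable state show up in completionsPerSecond
long virtualRunMillis = (secondsToReachTargetState + 2) * 1000L;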

Aggregations

ClockedExecutor (com.linkedin.test.util.ClockedExecutor): 16
Test (org.testng.annotations.Test): 14
None (com.linkedin.common.util.None): 12
ArrayList (java.util.ArrayList): 9
ConstantQpsRateLimiter (com.linkedin.r2.transport.http.client.ConstantQpsRateLimiter): 7
FutureCallback (com.linkedin.common.callback.FutureCallback): 6
AsyncRateLimiter (com.linkedin.r2.transport.http.client.AsyncRateLimiter): 3
ExecutionException (java.util.concurrent.ExecutionException): 3
LongTracking (com.linkedin.common.stats.LongTracking): 2
DarkClusterDispatcher (com.linkedin.darkcluster.api.DarkClusterDispatcher): 2
DefaultDarkClusterDispatcher (com.linkedin.darkcluster.impl.DefaultDarkClusterDispatcher): 2
RestRequest (com.linkedin.r2.message.rest.RestRequest): 2
RestRequestBuilder (com.linkedin.r2.message.rest.RestRequestBuilder): 2
AsyncPoolImpl (com.linkedin.r2.transport.http.client.AsyncPoolImpl): 2
ExponentialBackOffRateLimiter (com.linkedin.r2.transport.http.client.ExponentialBackOffRateLimiter): 2
ObjectCreationTimeoutException (com.linkedin.r2.transport.http.client.ObjectCreationTimeoutException): 2
PoolStats (com.linkedin.r2.transport.http.client.PoolStats): 2
SmoothRateLimiter (com.linkedin.r2.transport.http.client.SmoothRateLimiter): 2
TimeoutException (java.util.concurrent.TimeoutException): 2
DarkClusterConfig (com.linkedin.d2.DarkClusterConfig): 1