Example 1 with LongTracking

Use of com.linkedin.common.stats.LongTracking in project rest.li by linkedin.

Class TestAsyncPool, method testCreationTimeout.

@Test(dataProvider = "creationTimeoutDataProvider")
public void testCreationTimeout(int poolSize, int concurrency) throws Exception {
    // This object creation lifecycle simulates objects stuck in a creation limbo state
    ObjectCreatorThatNeverCreates objectCreatorThatNeverCreates = new ObjectCreatorThatNeverCreates();
    ClockedExecutor clockedExecutor = new ClockedExecutor();
    ExponentialBackOffRateLimiter rateLimiter = new ExponentialBackOffRateLimiter(0, 5000, 10, clockedExecutor, concurrency);
    final AsyncPool<Object> pool = new AsyncPoolImpl<>("object pool", objectCreatorThatNeverCreates, poolSize, Integer.MAX_VALUE, Integer.MAX_VALUE, clockedExecutor, Integer.MAX_VALUE, AsyncPoolImpl.Strategy.MRU, 0, rateLimiter, clockedExecutor, new LongTracking());
    pool.start();
    List<FutureCallback<Object>> checkoutCallbacks = new ArrayList<>();
    // Try to check out more than the max pool size while the object creator is stuck in its limbo state
    for (int i = 0; i < poolSize * 2; i++) {
        FutureCallback<Object> cb = new FutureCallback<>();
        checkoutCallbacks.add(cb);
        // Reset the exponential back-off triggered by the previous creation timeout error
        rateLimiter.setPeriod(0);
        pool.get(cb);
        // Run for the duration of the default creation timeout
        // TODO: parameterize the creation duration when the default creation gets parameterized
        clockedExecutor.runFor(AsyncPoolImpl.DEFAULT_OBJECT_CREATION_TIMEOUT);
    }
    // drain all the pending tasks
    clockedExecutor.runFor(AsyncPoolImpl.DEFAULT_OBJECT_CREATION_TIMEOUT);
    // Since the object creator never completes a creation, every checkout should fail with a creation timeout
    for (FutureCallback<Object> cb : checkoutCallbacks) {
        try {
            cb.get(100, TimeUnit.MILLISECONDS);
            Assert.fail("Expected the checkout to fail with ObjectCreationTimeoutException");
        } catch (Exception ex) {
            Assert.assertTrue(ex.getCause() instanceof ObjectCreationTimeoutException);
        }
    }
    // Verify that the pool stats are in the expected state
    PoolStats stats = pool.getStats();
    // All the creations stuck in limbo should have timed out
    Assert.assertEquals(stats.getTotalCreateErrors(), poolSize * 2);
    // No checkout should have succeeded because the object creator is stuck in limbo
    Assert.assertEquals(stats.getCheckedOut(), 0);
    // No idle objects in the pool
    Assert.assertEquals(stats.getIdleCount(), 0);
    // All slots in the pool should be reclaimed even though object creation is stuck in limbo
    Assert.assertEquals(stats.getPoolSize(), 0);
    // Since the pending creation requests reached the max pool size,
    // the pool should have hit its max pool size at least once
    Assert.assertEquals(stats.getMaxPoolSize(), poolSize);
    // Since no object was successfully created, the idle count should be zero
    Assert.assertEquals(stats.getIdleCount(), 0);
}
Also used : LongTracking(com.linkedin.common.stats.LongTracking) ArrayList(java.util.ArrayList) ClockedExecutor(com.linkedin.test.util.ClockedExecutor) ObjectCreationTimeoutException(com.linkedin.r2.transport.http.client.ObjectCreationTimeoutException) TimeoutException(java.util.concurrent.TimeoutException) ExecutionException(java.util.concurrent.ExecutionException) PoolStats(com.linkedin.r2.transport.http.client.PoolStats) AsyncPoolImpl(com.linkedin.r2.transport.http.client.AsyncPoolImpl) ExponentialBackOffRateLimiter(com.linkedin.r2.transport.http.client.ExponentialBackOffRateLimiter) FutureCallback(com.linkedin.common.callback.FutureCallback) Test(org.testng.annotations.Test)
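
The AsyncPoolImpl constructor call above takes a long list of positional arguments. The annotated restatement below is a reading aid only: the parameter meanings are inferred from how the two TestAsyncPool tests vary them and from the PoolStats getters they assert on, not from the AsyncPoolImpl Javadoc, so treat the ones marked "presumably" as assumptions.

final AsyncPool<Object> pool = new AsyncPoolImpl<>(
    "object pool",                  // pool name used for stats and logging
    objectCreatorThatNeverCreates,  // object lifecycle/creator; here it never completes a creation
    poolSize,                       // max pool size (asserted via stats.getMaxPoolSize() above)
    Integer.MAX_VALUE,              // presumably an idle timeout; effectively disabled in this test
    Integer.MAX_VALUE,              // waiter timeout (testWaiterTimeout below passes its 'waiterTimeout' here)
    clockedExecutor,                // scheduler driving timeouts; ClockedExecutor lets the test control time
    Integer.MAX_VALUE,              // presumably the maximum number of waiters; effectively unbounded here
    AsyncPoolImpl.Strategy.MRU,     // object reuse strategy
    0,                              // presumably the minimum pool size
    rateLimiter,                    // ExponentialBackOffRateLimiter applied to object creation
    clockedExecutor,                // clock; the ClockedExecutor doubles as the clock in these tests
    new LongTracking());            // LongTracking backing the wait-time percentile stats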

Example 2 with LongTracking

Use of com.linkedin.common.stats.LongTracking in project rest.li by linkedin.

Class TestAsyncPool, method testWaiterTimeout.

/**
 * This test case verifies that the correct number of waiters time out while waiting for an object from the pool.
 *
 *     Assumption: the channel pool max size is always bigger than the requested checkout size
 *
 *|----------A------------|---------------B---------------|---------------C--------------|-------------D--------------
 *   A = In Phase A, N object checkout requests are sent to the pool while no tasks are pending in the rate
 *       limiter. Expected result: the pool creates N new objects and checks them out.
 *   B = In Phase B, O more checkout requests are sent to the pool, which has already checked out N objects.
 *       In this phase object creation inside the pool is blocked, so the rate limiter queues the creation
 *       requests once its configured maximum concurrency is reached.
 *   C = In Phase C, P of the objects created in Phase A are returned to the pool, which reduces the waiter
 *       queue size to O-P.
 *   D = In Phase D, a delay is introduced so that the remaining O-P waiters time out. After the delay, object
 *       creation is unblocked and the pool should still create at least 'concurrency' objects even though the
 *       waiters have timed out.
 *
 * @param numberOfCheckoutsInPhaseA the N checkout operations performed in Phase A
 * @param numberOfCheckoutsInPhaseB the O checkout operations performed in Phase B
 * @param numbOfObjectsToBeReturnedInPhaseC the number of objects returned in Phase C
 * @param poolSize size of the pool
 * @param concurrency concurrency of the rate limiter
 * @param waiterTimeout timeout after which waiters still queued in the pool are timed out
 */
@Test(dataProvider = "waiterTimeoutDataProvider")
public void testWaiterTimeout(int numberOfCheckoutsInPhaseA, int numberOfCheckoutsInPhaseB, int numbOfObjectsToBeReturnedInPhaseC, int poolSize, int concurrency, int waiterTimeout) throws Exception {
    CreationBlockableSynchronousLifecycle blockableObjectCreator = new CreationBlockableSynchronousLifecycle(numberOfCheckoutsInPhaseB, concurrency);
    ScheduledExecutorService executor = Executors.newScheduledThreadPool(500);
    ExponentialBackOffRateLimiter rateLimiter = new ExponentialBackOffRateLimiter(0, 5000, 10, executor, concurrency);
    ClockedExecutor clockedExecutor = new ClockedExecutor();
    final AsyncPool<Object> pool = new AsyncPoolImpl<>("object pool", blockableObjectCreator, poolSize, Integer.MAX_VALUE, waiterTimeout, clockedExecutor, Integer.MAX_VALUE, AsyncPoolImpl.Strategy.MRU, 0, rateLimiter, clockedExecutor, new LongTracking());
    pool.start();
    // Phase A: Check out an object 'numberOfCheckoutsInPhaseA' times
    List<Object> checkedOutObjects = performCheckout(numberOfCheckoutsInPhaseA, pool);
    // Phase B: Block object creation and perform the checkout 'numberOfCheckoutsInPhaseB' more times
    blockableObjectCreator.blockCreation();
    Future<None> future = performUnblockingCheckout(numberOfCheckoutsInPhaseB, 0, pool);
    blockableObjectCreator.waitUntilAllBlocked();
    // Phase C: Return 'numbOfObjectsToBeReturnedInPhaseC' of the objects checked out in Phase A back to the pool
    for (int i = 0; i < numbOfObjectsToBeReturnedInPhaseC; i++) {
        pool.put(checkedOutObjects.remove(0));
    }
    // Advance the clock by the waiter timeout so the remaining waiters time out
    clockedExecutor.runFor(waiterTimeout);
    // Phase D: Unblock all the object creations that were blocked in Phase B
    blockableObjectCreator.unblockCreation();
    try {
        future.get(5, TimeUnit.SECONDS);
    } catch (Exception e) {
        Assert.fail("Did not complete unblocked object creations on time, Unexpected interruption", e);
    }
    // Make sure the rate limiter's pending tasks have been submitted to the executor
    AssertionMethods.assertWithTimeout(5000, () -> Assert.assertEquals(rateLimiter.numberOfPendingTasks(), 0, "Number of tasks has to drop to 0"));
    executor.shutdown();
    try {
        if (!executor.awaitTermination(10, TimeUnit.SECONDS)) {
            Assert.fail("Executor took too long to shutdown");
        }
    } catch (Exception ex) {
        Assert.fail("Unexpected interruption while shutting down executor", ex);
    }
    PoolStats stats = pool.getStats();
    Assert.assertEquals(stats.getTotalCreationIgnored(), numberOfCheckoutsInPhaseB - concurrency);
    Assert.assertEquals(stats.getCheckedOut(), numberOfCheckoutsInPhaseA);
    Assert.assertEquals(stats.getIdleCount(), concurrency);
    Assert.assertEquals(stats.getTotalCreated(), numberOfCheckoutsInPhaseA + concurrency);
    Assert.assertEquals(stats.getPoolSize(), numberOfCheckoutsInPhaseA + concurrency);
    Assert.assertEquals(stats.getTotalWaiterTimedOut(), numberOfCheckoutsInPhaseB - numbOfObjectsToBeReturnedInPhaseC);
}
Also used : LongTracking(com.linkedin.common.stats.LongTracking) ScheduledExecutorService(java.util.concurrent.ScheduledExecutorService) ClockedExecutor(com.linkedin.test.util.ClockedExecutor) ObjectCreationTimeoutException(com.linkedin.r2.transport.http.client.ObjectCreationTimeoutException) TimeoutException(java.util.concurrent.TimeoutException) ExecutionException(java.util.concurrent.ExecutionException) PoolStats(com.linkedin.r2.transport.http.client.PoolStats) AsyncPoolImpl(com.linkedin.r2.transport.http.client.AsyncPoolImpl) ExponentialBackOffRateLimiter(com.linkedin.r2.transport.http.client.ExponentialBackOffRateLimiter) None(com.linkedin.common.util.None) Test(org.testng.annotations.Test)
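
The assertions at the end of testWaiterTimeout encode the phase arithmetic described in the Javadoc. The short restatement below, in terms of the phase variables, mirrors those assertions; it is a reading aid under the same assumptions as the test, not additional test logic.

int n = numberOfCheckoutsInPhaseA;          // checkouts served immediately in Phase A
int o = numberOfCheckoutsInPhaseB;          // checkouts that become waiters in Phase B
int p = numbOfObjectsToBeReturnedInPhaseC;  // objects returned in Phase C, each serving one waiter

int expectedIgnoredCreations = o - concurrency;  // creations still queued in the rate limiter after their waiters are gone
int expectedCheckedOut       = n;                // (n - p) kept from Phase A plus p handed to waiters in Phase C
int expectedIdle             = concurrency;      // in-flight creations finish after unblocking with no waiters left
int expectedCreated          = n + concurrency;  // Phase A creations plus the unblocked in-flight creations
int expectedPoolSize         = n + concurrency;  // every created object is still owned by the pool
int expectedWaitersTimedOut  = o - p;            // waiters not served by Phase C returns time out in Phase D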

Example 3 with LongTracking

Use of com.linkedin.common.stats.LongTracking in project rest.li by linkedin.

Class TestAsyncPoolStatsTracker, method testIncrements.

@Test
public void testIncrements() {
    AsyncPoolStatsTracker tracker = new AsyncPoolStatsTracker(() -> LIFECYCLE_STATS, () -> MAX_SIZE, () -> MIN_SIZE, () -> POOL_SIZE, () -> CHECKED_OUT, () -> IDLE_SIZE, CLOCK, new LongTracking());
    IntStream.range(0, DESTROY_ERROR_INCREMENTS).forEach(i -> tracker.incrementDestroyErrors());
    IntStream.range(0, DESTROY_INCREMENTS).forEach(i -> tracker.incrementDestroyed());
    IntStream.range(0, TIMEOUT_INCREMENTS).forEach(i -> tracker.incrementTimedOut());
    IntStream.range(0, CREATE_ERROR_INCREMENTS).forEach(i -> tracker.incrementCreateErrors());
    IntStream.range(0, BAD_DESTROY_INCREMENTS).forEach(i -> tracker.incrementBadDestroyed());
    IntStream.range(0, CREATED_INCREMENTS).forEach(i -> tracker.incrementCreated());
    AsyncPoolStats stats = tracker.getStats();
    Assert.assertEquals(stats.getTotalDestroyErrors(), DESTROY_ERROR_INCREMENTS);
    Assert.assertEquals(stats.getTotalDestroyed(), DESTROY_INCREMENTS);
    Assert.assertEquals(stats.getTotalTimedOut(), TIMEOUT_INCREMENTS);
    Assert.assertEquals(stats.getTotalCreateErrors(), CREATE_ERROR_INCREMENTS);
    Assert.assertEquals(stats.getTotalBadDestroyed(), BAD_DESTROY_INCREMENTS);
    Assert.assertEquals(stats.getTotalCreated(), CREATED_INCREMENTS);
    Assert.assertEquals(stats.getCheckedOut(), CHECKED_OUT);
    Assert.assertEquals(stats.getPoolSize(), POOL_SIZE);
}
Also used : LongTracking(com.linkedin.common.stats.LongTracking) AsyncPoolStatsTracker(com.linkedin.r2.transport.http.client.AsyncPoolStatsTracker) AsyncPoolStats(com.linkedin.r2.transport.http.client.AsyncPoolStats) Test(org.testng.annotations.Test)
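
The supplier-based constructor used above reads more easily when the arguments are spelled out. The sketch below is written to be self-contained under the constructor shape shown in these tests; the numeric values, the SettableClock, and the null lifecycle-stats supplier are placeholders chosen for the sketch, not values from the test class.

import com.linkedin.common.stats.LongTracking;
import com.linkedin.r2.transport.http.client.AsyncPoolStats;
import com.linkedin.r2.transport.http.client.AsyncPoolStatsTracker;
import com.linkedin.util.clock.SettableClock;

public class StatsTrackerSketch {
    public static void main(String[] args) {
        AsyncPoolStatsTracker tracker = new AsyncPoolStatsTracker(
            () -> null,           // lifecycle stats supplier (the test supplies LIFECYCLE_STATS here)
            () -> 100,            // max pool size supplier
            () -> 0,              // min pool size supplier
            () -> 25,             // current pool size supplier
            () -> 10,             // checked-out count supplier
            () -> 15,             // idle count supplier
            new SettableClock(),  // clock, as in testMinimumSamplingPeriod below
            new LongTracking());  // wait-time tracker backing the getWaitTime* percentiles

        tracker.incrementCreated();
        AsyncPoolStats stats = tracker.getStats();
        System.out.println(stats.getTotalCreated());  // prints 1
        System.out.println(stats.getCheckedOut());    // prints 10, straight from the supplier
    }
}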

Example 4 with LongTracking

Use of com.linkedin.common.stats.LongTracking in project rest.li by linkedin.

Class TestAsyncPoolStatsTracker, method testDefaults.

@Test
public void testDefaults() {
    AsyncPoolStatsTracker tracker = new AsyncPoolStatsTracker(() -> LIFECYCLE_STATS, () -> MAX_SIZE, () -> MIN_SIZE, () -> POOL_SIZE, () -> CHECKED_OUT, () -> IDLE_SIZE, CLOCK, new LongTracking());
    AsyncPoolStats stats = tracker.getStats();
    Assert.assertSame(stats.getLifecycleStats(), LIFECYCLE_STATS);
    Assert.assertEquals(stats.getMaxPoolSize(), MAX_SIZE);
    Assert.assertEquals(stats.getMinPoolSize(), MIN_SIZE);
    Assert.assertEquals(stats.getIdleCount(), IDLE_SIZE);
    Assert.assertEquals(stats.getCheckedOut(), CHECKED_OUT);
    Assert.assertEquals(stats.getPoolSize(), POOL_SIZE);
    Assert.assertEquals(stats.getTotalDestroyErrors(), 0);
    Assert.assertEquals(stats.getTotalDestroyed(), 0);
    Assert.assertEquals(stats.getTotalTimedOut(), 0);
    Assert.assertEquals(stats.getTotalCreateErrors(), 0);
    Assert.assertEquals(stats.getTotalBadDestroyed(), 0);
    Assert.assertEquals(stats.getTotalCreated(), 0);
    Assert.assertEquals(stats.getWaitTime50Pct(), 0);
    Assert.assertEquals(stats.getWaitTime95Pct(), 0);
    Assert.assertEquals(stats.getWaitTime99Pct(), 0);
    Assert.assertEquals(stats.getWaitTimeAvg(), 0.0);
}
Also used : LongTracking(com.linkedin.common.stats.LongTracking) AsyncPoolStatsTracker(com.linkedin.r2.transport.http.client.AsyncPoolStatsTracker) AsyncPoolStats(com.linkedin.r2.transport.http.client.AsyncPoolStats) Test(org.testng.annotations.Test)

Example 5 with LongTracking

Use of com.linkedin.common.stats.LongTracking in project rest.li by linkedin.

Class TestAsyncPoolStatsTracker, method testMinimumSamplingPeriod.

/**
 * Tests that sampled values stay the same when #getStats() is called multiple times within the
 * same sampling period. Also tests that the samplers are correctly updated when #getStats() is
 * called in successive sampling periods.
 */
@Test
public void testMinimumSamplingPeriod() {
    SettableClock clock = new SettableClock();
    AsyncPoolStatsTracker tracker = new AsyncPoolStatsTracker(() -> LIFECYCLE_STATS, () -> MAX_SIZE, () -> MIN_SIZE, () -> _poolSize, () -> _checkedOut, () -> IDLE_SIZE, clock, new LongTracking());
    // Samples the max values
    tracker.sampleMaxPoolSize();
    tracker.sampleMaxCheckedOut();
    tracker.sampleMaxWaitTime(WAIT_TIME);
    Assert.assertEquals(tracker.getStats().getSampleMaxPoolSize(), POOL_SIZE);
    Assert.assertEquals(tracker.getStats().getSampleMaxCheckedOut(), CHECKED_OUT);
    Assert.assertEquals(tracker.getStats().getSampleMaxWaitTime(), WAIT_TIME);
    // Without advancing the clock we should still get the old sampled values
    _poolSize = POOL_SIZE + 10;
    tracker.sampleMaxPoolSize();
    _checkedOut = CHECKED_OUT + 10;
    tracker.sampleMaxCheckedOut();
    tracker.sampleMaxWaitTime(WAIT_TIME + 100);
    Assert.assertEquals(tracker.getStats().getSampleMaxPoolSize(), POOL_SIZE);
    Assert.assertEquals(tracker.getStats().getSampleMaxCheckedOut(), CHECKED_OUT);
    Assert.assertEquals(tracker.getStats().getSampleMaxWaitTime(), WAIT_TIME);
    // After advancing the clock past the sampling period we should get the newly sampled values
    clock.addDuration(SAMPLING_DURATION_INCREMENT);
    Assert.assertEquals(tracker.getStats().getSampleMaxPoolSize(), POOL_SIZE + 10);
    Assert.assertEquals(tracker.getStats().getSampleMaxCheckedOut(), CHECKED_OUT + 10);
    Assert.assertEquals(tracker.getStats().getSampleMaxWaitTime(), WAIT_TIME + 100);
}
Also used : LongTracking(com.linkedin.common.stats.LongTracking) SettableClock(com.linkedin.util.clock.SettableClock) AsyncPoolStatsTracker(com.linkedin.r2.transport.http.client.AsyncPoolStatsTracker) Test(org.testng.annotations.Test)
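
The behaviour verified above can be pictured as a small "max per sampling window" gate: samples keep accumulating, but reads only roll the window over once the minimum sampling period has elapsed on the clock. The sketch below is a conceptual illustration of that pattern, not the real AsyncPoolStatsTracker code; the class, field names, and rollover policy are assumptions made for the sketch.

import com.linkedin.util.clock.Clock;

class SampledMax {
    private final Clock _clock;
    private final long _minSamplePeriodMs;
    private long _lastRolloverMs;
    private long _currentMax;    // max observed in the in-progress window
    private long _reportedMax;   // value returned until the window elapses

    SampledMax(Clock clock, long minSamplePeriodMs) {
        _clock = clock;
        _minSamplePeriodMs = minSamplePeriodMs;
        // Let the first read roll over immediately, matching the first getStats() call above
        _lastRolloverMs = clock.currentTimeMillis() - minSamplePeriodMs;
    }

    void sample(long value) {
        _currentMax = Math.max(_currentMax, value);
    }

    long get() {
        long now = _clock.currentTimeMillis();
        if (now - _lastRolloverMs >= _minSamplePeriodMs) {
            _reportedMax = _currentMax;  // roll the window over
            _currentMax = 0;             // start the next window empty (samples here are non-negative)
            _lastRolloverMs = now;
        }
        return _reportedMax;             // unchanged within the same window
    }
}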

Aggregations

LongTracking (com.linkedin.common.stats.LongTracking)9 Test (org.testng.annotations.Test)9 AsyncPoolStatsTracker (com.linkedin.r2.transport.http.client.AsyncPoolStatsTracker)5 SettableClock (com.linkedin.util.clock.SettableClock)4 FutureCallback (com.linkedin.common.callback.FutureCallback)3 AsyncPoolImpl (com.linkedin.r2.transport.http.client.AsyncPoolImpl)3 PoolStats (com.linkedin.r2.transport.http.client.PoolStats)3 ExecutionException (java.util.concurrent.ExecutionException)3 TimeoutException (java.util.concurrent.TimeoutException)3 AsyncPoolStats (com.linkedin.r2.transport.http.client.AsyncPoolStats)2 ExponentialBackOffRateLimiter (com.linkedin.r2.transport.http.client.ExponentialBackOffRateLimiter)2 ObjectCreationTimeoutException (com.linkedin.r2.transport.http.client.ObjectCreationTimeoutException)2 ClockedExecutor (com.linkedin.test.util.ClockedExecutor)2 ArrayList (java.util.ArrayList)2 ScheduledExecutorService (java.util.concurrent.ScheduledExecutorService)2 None (com.linkedin.common.util.None)1 RemoteInvocationException (com.linkedin.r2.RemoteInvocationException)1 RequestContext (com.linkedin.r2.message.RequestContext)1 RestRequest (com.linkedin.r2.message.rest.RestRequest)1 RestRequestBuilder (com.linkedin.r2.message.rest.RestRequestBuilder)1