Example 16 with BoxedUnit

Use of scala.runtime.BoxedUnit in project distributedlog by twitter.

The class ZKSessionLockFactory, method createLock:

@Override
public Future<SessionLock> createLock(String lockPath, DistributedLockContext context) {
    AtomicInteger numRetries = new AtomicInteger(lockCreationRetries);
    final AtomicReference<Throwable> interruptedException = new AtomicReference<Throwable>(null);
    Promise<SessionLock> createPromise = new Promise<SessionLock>(new com.twitter.util.Function<Throwable, BoxedUnit>() {

        @Override
        public BoxedUnit apply(Throwable t) {
            interruptedException.set(t);
            return BoxedUnit.UNIT;
        }
    });
    createLock(lockPath, context, interruptedException, numRetries, createPromise, 0L);
    return createPromise;
}
Also used : Promise(com.twitter.util.Promise) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) AtomicReference(java.util.concurrent.atomic.AtomicReference) BoxedUnit(scala.runtime.BoxedUnit)
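
The interrupt handler above can be factored into a reusable helper. Below is a minimal sketch, assuming the com.twitter.util version bundled with distributedlog, where Promise takes an interrupt handler in its constructor; the names PromiseHelpers and interruptiblePromise are hypothetical, not distributedlog API.

import java.util.concurrent.atomic.AtomicReference;
import com.twitter.util.Promise;
import scala.runtime.BoxedUnit;

public final class PromiseHelpers {

    private PromiseHelpers() {
    }

    // Builds a Promise whose interrupt handler records the interrupting
    // Throwable into the supplied holder, mirroring createLock above.
    public static <T> Promise<T> interruptiblePromise(final AtomicReference<Throwable> holder) {
        return new Promise<T>(new com.twitter.util.Function<Throwable, BoxedUnit>() {
            @Override
            public BoxedUnit apply(Throwable t) {
                holder.set(t);
                // BoxedUnit.UNIT is the single Unit value as seen from Java
                return BoxedUnit.UNIT;
            }
        });
    }
}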

Example 17 with BoxedUnit

Use of scala.runtime.BoxedUnit in project distributedlog by twitter.

The class TestZKSessionLock, method testExecuteLockAction:

@Test(timeout = 60000)
public void testExecuteLockAction() throws Exception {
    String lockPath = "/test-execute-lock-action";
    String clientId = "test-execute-lock-action-" + System.currentTimeMillis();
    ZKSessionLock lock = new ZKSessionLock(zkc, lockPath, clientId, lockStateExecutor);
    final AtomicInteger counter = new AtomicInteger(0);
    // lock action would be executed in same epoch
    final CountDownLatch latch1 = new CountDownLatch(1);
    lock.executeLockAction(lock.getEpoch().get(), new LockAction() {

        @Override
        public void execute() {
            counter.incrementAndGet();
            latch1.countDown();
        }

        @Override
        public String getActionName() {
            return "increment1";
        }
    });
    latch1.await();
    assertEquals("counter should be increased in same epoch", 1, counter.get());
    // lock action would not be executed in same epoch
    final CountDownLatch latch2 = new CountDownLatch(1);
    lock.executeLockAction(lock.getEpoch().get() + 1, new LockAction() {

        @Override
        public void execute() {
            counter.incrementAndGet();
        }

        @Override
        public String getActionName() {
            return "increment2";
        }
    });
    lock.executeLockAction(lock.getEpoch().get(), new LockAction() {

        @Override
        public void execute() {
            latch2.countDown();
        }

        @Override
        public String getActionName() {
            return "countdown";
        }
    });
    latch2.await();
    assertEquals("counter should not be increased in different epochs", 1, counter.get());
    // lock action would not be executed in same epoch and promise would be satisfied with exception
    Promise<BoxedUnit> promise = new Promise<BoxedUnit>();
    lock.executeLockAction(lock.getEpoch().get() + 1, new LockAction() {

        @Override
        public void execute() {
            counter.incrementAndGet();
        }

        @Override
        public String getActionName() {
            return "increment3";
        }
    }, promise);
    try {
        Await.result(promise);
        fail("Should satisfy promise with epoch changed exception.");
    } catch (EpochChangedException ece) {
        // expected
    }
    assertEquals("counter should not be increased in different epochs", 1, counter.get());
    lockStateExecutor.shutdown();
}
Also used : Promise(com.twitter.util.Promise) AtomicInteger(java.util.concurrent.atomic.AtomicInteger) ZKSessionLock(com.twitter.distributedlog.lock.ZKSessionLock) BoxedUnit(scala.runtime.BoxedUnit) CountDownLatch(java.util.concurrent.CountDownLatch) Test(org.junit.Test)
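
What the test pins down is an epoch gate: an action runs only while the caller's epoch matches the lock's current epoch, and otherwise the supplied promise is failed. The sketch below illustrates that contract only; it is not the real ZKSessionLock implementation, and it substitutes a plain IllegalStateException for distributedlog's EpochChangedException.

import java.util.concurrent.atomic.AtomicInteger;
import com.twitter.util.Promise;
import scala.runtime.BoxedUnit;

// Hypothetical illustration of the epoch-gating contract exercised above.
class EpochGate {

    private final AtomicInteger epoch = new AtomicInteger(0);

    // Runs the action only if the caller still holds the current epoch;
    // otherwise fails the promise, as the test's third case asserts.
    void executeLockAction(int expectedEpoch, Runnable action, Promise<BoxedUnit> promise) {
        if (epoch.get() == expectedEpoch) {
            action.run();
            promise.setValue(BoxedUnit.UNIT);
        } else {
            promise.setException(new IllegalStateException("lock epoch has changed"));
        }
    }
}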

Example 18 with BoxedUnit

Use of scala.runtime.BoxedUnit in project samza by apache.

The class KafkaSystemAdmin, method getSSPMetadata:

/**
 * Given a set of SystemStreamPartitions, fetch metadata from Kafka for each
 * of them, and return a map from SSP to SystemStreamPartitionMetadata. The
 * oldest and newest offsets are null for an empty SystemStreamPartition.
 * On failure this method blocks and retries with the supplied backoff
 * strategy, up to MAX_RETRIES_ON_EXCEPTION attempts, before throwing a
 * SamzaException.
 * @param ssps the set of SystemStreamPartitions to fetch metadata for
 * @param retryBackoff retry backoff strategy
 * @return a map from SSP to SSP metadata containing the offsets
 */
Map<SystemStreamPartition, SystemStreamMetadata.SystemStreamPartitionMetadata> getSSPMetadata(Set<SystemStreamPartition> ssps, ExponentialSleepStrategy retryBackoff) {
    LOG.info("Fetching SSP metadata for: {}", ssps);
    List<TopicPartition> topicPartitions = ssps.stream().map(ssp -> new TopicPartition(ssp.getStream(), ssp.getPartition().getPartitionId())).collect(Collectors.toList());
    Function1<ExponentialSleepStrategy.RetryLoop, Map<SystemStreamPartition, SystemStreamMetadata.SystemStreamPartitionMetadata>> fetchTopicPartitionMetadataOperation = new AbstractFunction1<ExponentialSleepStrategy.RetryLoop, Map<SystemStreamPartition, SystemStreamMetadata.SystemStreamPartitionMetadata>>() {

        @Override
        public Map<SystemStreamPartition, SystemStreamMetadata.SystemStreamPartitionMetadata> apply(ExponentialSleepStrategy.RetryLoop loop) {
            OffsetsMaps topicPartitionsMetadata = fetchTopicPartitionsMetadata(topicPartitions);
            Map<SystemStreamPartition, SystemStreamMetadata.SystemStreamPartitionMetadata> sspToSSPMetadata = new HashMap<>();
            for (SystemStreamPartition ssp : ssps) {
                String oldestOffset = topicPartitionsMetadata.getOldestOffsets().get(ssp);
                String newestOffset = topicPartitionsMetadata.getNewestOffsets().get(ssp);
                String upcomingOffset = topicPartitionsMetadata.getUpcomingOffsets().get(ssp);
                sspToSSPMetadata.put(ssp, new SystemStreamMetadata.SystemStreamPartitionMetadata(oldestOffset, newestOffset, upcomingOffset));
            }
            loop.done();
            return sspToSSPMetadata;
        }
    };
    Function2<Exception, ExponentialSleepStrategy.RetryLoop, BoxedUnit> onExceptionRetryOperation = new AbstractFunction2<Exception, ExponentialSleepStrategy.RetryLoop, BoxedUnit>() {

        @Override
        public BoxedUnit apply(Exception exception, ExponentialSleepStrategy.RetryLoop loop) {
            if (loop.sleepCount() < MAX_RETRIES_ON_EXCEPTION) {
                LOG.warn(String.format("Fetching SSP metadata for: %s threw an exception. Retrying.", ssps), exception);
            } else {
                LOG.error(String.format("Fetching SSP metadata for: %s threw an exception.", ssps), exception);
                loop.done();
                throw new SamzaException(exception);
            }
            // BoxedUnit.UNIT is the idiomatic Unit value to return from Java
            return BoxedUnit.UNIT;
        }
    };
    Function0<Map<SystemStreamPartition, SystemStreamMetadata.SystemStreamPartitionMetadata>> fallbackOperation = new AbstractFunction0<Map<SystemStreamPartition, SystemStreamMetadata.SystemStreamPartitionMetadata>>() {

        @Override
        public Map<SystemStreamPartition, SystemStreamMetadata.SystemStreamPartitionMetadata> apply() {
            throw new SamzaException("Failed to get SSP metadata");
        }
    };
    return retryBackoff.run(fetchTopicPartitionMetadataOperation, onExceptionRetryOperation).getOrElse(fallbackOperation);
}
Also used : LoggerFactory(org.slf4j.LoggerFactory) StartpointTimestamp(org.apache.samza.startpoint.StartpointTimestamp) Startpoint(org.apache.samza.startpoint.Startpoint) StringUtils(org.apache.commons.lang3.StringUtils) AdminClient(org.apache.kafka.clients.admin.AdminClient) StartpointSpecific(org.apache.samza.startpoint.StartpointSpecific) KafkaConfig(org.apache.samza.config.KafkaConfig) Map(java.util.Map) DeleteTopicsResult(org.apache.kafka.clients.admin.DeleteTopicsResult) MapConfig(org.apache.samza.config.MapConfig) TopicConfig(org.apache.kafka.common.config.TopicConfig) Consumer(org.apache.kafka.clients.consumer.Consumer) TopicPartition(org.apache.kafka.common.TopicPartition) ImmutableSet(com.google.common.collect.ImmutableSet) ImmutableMap(com.google.common.collect.ImmutableMap) Set(java.util.Set) ConsumerConfig(org.apache.kafka.clients.consumer.ConsumerConfig) PartitionInfo(org.apache.kafka.common.PartitionInfo) OffsetAndTimestamp(org.apache.kafka.clients.consumer.OffsetAndTimestamp) Collectors(java.util.stream.Collectors) TopicExistsException(org.apache.kafka.common.errors.TopicExistsException) List(java.util.List) StartpointUpcoming(org.apache.samza.startpoint.StartpointUpcoming) AbstractFunction0(scala.runtime.AbstractFunction0) AbstractFunction1(scala.runtime.AbstractFunction1) Optional(java.util.Optional) AbstractFunction2(scala.runtime.AbstractFunction2) Config(org.apache.samza.config.Config) NotImplementedException(org.apache.commons.lang3.NotImplementedException) StartpointOldest(org.apache.samza.startpoint.StartpointOldest) Function0(scala.Function0) StreamValidationException(org.apache.samza.system.StreamValidationException) Function1(scala.Function1) AtomicBoolean(java.util.concurrent.atomic.AtomicBoolean) Function2(scala.Function2) HashMap(java.util.HashMap) SystemStreamPartition(org.apache.samza.system.SystemStreamPartition) SystemStreamMetadata(org.apache.samza.system.SystemStreamMetadata) Function(java.util.function.Function) StreamConfig(org.apache.samza.config.StreamConfig) RecordsToDelete(org.apache.kafka.clients.admin.RecordsToDelete) HashSet(java.util.HashSet) StartpointVisitor(org.apache.samza.startpoint.StartpointVisitor) DescribeTopicsResult(org.apache.kafka.clients.admin.DescribeTopicsResult) SystemStream(org.apache.samza.system.SystemStream) CreateTopicsResult(org.apache.kafka.clients.admin.CreateTopicsResult) ApplicationConfig(org.apache.samza.config.ApplicationConfig) SystemConfig(org.apache.samza.config.SystemConfig) TopicDescription(org.apache.kafka.clients.admin.TopicDescription) KafkaUtil(org.apache.samza.util.KafkaUtil) Logger(org.slf4j.Logger) Properties(java.util.Properties) ExponentialSleepStrategy(org.apache.samza.util.ExponentialSleepStrategy) NewTopic(org.apache.kafka.clients.admin.NewTopic) Partition(org.apache.samza.Partition) StreamSpec(org.apache.samza.system.StreamSpec) BoxedUnit(scala.runtime.BoxedUnit) SamzaException(org.apache.samza.SamzaException) TimeUnit(java.util.concurrent.TimeUnit) SystemAdmin(org.apache.samza.system.SystemAdmin) Preconditions(com.google.common.base.Preconditions) VisibleForTesting(com.google.common.annotations.VisibleForTesting) Collections(java.util.Collections)
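
The three AbstractFunction subclasses exist only to hand Java callbacks to the Scala-side ExponentialSleepStrategy.run. As a hedged sketch, small adapters like the following, with hypothetical names ScalaFunctions, fn1, and fn2, would let such call sites take java.util.function lambdas instead:

import java.util.function.BiConsumer;
import java.util.function.Function;
import scala.runtime.AbstractFunction1;
import scala.runtime.AbstractFunction2;
import scala.runtime.BoxedUnit;

final class ScalaFunctions {

    private ScalaFunctions() {
    }

    // Wraps a java.util.function.Function as a scala.Function1.
    static <A, B> scala.Function1<A, B> fn1(final Function<A, B> f) {
        return new AbstractFunction1<A, B>() {
            @Override
            public B apply(A a) {
                return f.apply(a);
            }
        };
    }

    // Wraps a side-effecting callback as a scala.Function2 that returns Unit.
    static <A, B> scala.Function2<A, B, BoxedUnit> fn2(final BiConsumer<A, B> f) {
        return new AbstractFunction2<A, B, BoxedUnit>() {
            @Override
            public BoxedUnit apply(A a, B b) {
                f.accept(a, b);
                return BoxedUnit.UNIT;
            }
        };
    }
}

With these adapters, the fetch operation could be passed as fn1(loop -> { ... }) and the retry handler as fn2((exception, loop) -> { ... }).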

Example 19 with BoxedUnit

Use of scala.runtime.BoxedUnit in project Firestorm by Tencent.

The class RssShuffleReader, method read:

@Override
public Iterator<Product2<K, C>> read() {
    LOG.info("Shuffle read started:" + getReadInfo());
    Iterator<Product2<K, C>> aggrIter = null;
    Iterator<Product2<K, C>> resultIter = null;
    MultiPartitionIterator<K, C> rssShuffleDataIterator = new MultiPartitionIterator<K, C>();
    if (shuffleDependency.aggregator().isDefined()) {
        if (shuffleDependency.mapSideCombine()) {
            aggrIter = shuffleDependency.aggregator().get().combineCombinersByKey(rssShuffleDataIterator, context);
        } else {
            aggrIter = shuffleDependency.aggregator().get().combineValuesByKey(rssShuffleDataIterator, context);
        }
    } else {
        aggrIter = rssShuffleDataIterator;
    }
    if (shuffleDependency.keyOrdering().isDefined()) {
        // Create an ExternalSorter to sort the data
        ExternalSorter<K, C, C> sorter = new ExternalSorter<K, C, C>(context, Option.empty(), Option.empty(), shuffleDependency.keyOrdering(), serializer);
        LOG.info("Inserting aggregated records to sorter");
        long startTime = System.currentTimeMillis();
        sorter.insertAll(aggrIter);
        LOG.info("Inserted aggregated records to sorter: millis:" + (System.currentTimeMillis() - startTime));
        context.taskMetrics().incMemoryBytesSpilled(sorter.memoryBytesSpilled());
        context.taskMetrics().incPeakExecutionMemory(sorter.peakMemoryUsedBytes());
        context.taskMetrics().incDiskBytesSpilled(sorter.diskBytesSpilled());
        Function0<BoxedUnit> fn0 = new AbstractFunction0<BoxedUnit>() {

            @Override
            public BoxedUnit apply() {
                sorter.stop();
                return BoxedUnit.UNIT;
            }
        };
        Function1<TaskContext, Void> fn1 = new AbstractFunction1<TaskContext, Void>() {

            @Override
            public Void apply(TaskContext taskContext) {
                sorter.stop();
                return (Void) null;
            }
        };
        context.addTaskCompletionListener(fn1);
        resultIter = CompletionIterator$.MODULE$.apply(sorter.iterator(), fn0);
    } else {
        resultIter = aggrIter;
    }
    if (!(resultIter instanceof InterruptibleIterator)) {
        resultIter = new InterruptibleIterator<>(context, resultIter);
    }
    return resultIter;
}
Also used : TaskContext(org.apache.spark.TaskContext) Product2(scala.Product2) AbstractFunction0(scala.runtime.AbstractFunction0) AbstractFunction1(scala.runtime.AbstractFunction1) InterruptibleIterator(org.apache.spark.InterruptibleIterator) ExternalSorter(org.apache.spark.util.collection.ExternalSorter) BoxedUnit(scala.runtime.BoxedUnit)
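
Both cleanup callbacks just call sorter.stop(), so the adapter boilerplate can be shared. Below is a minimal sketch of a Runnable-to-Function0 bridge for the completion callback shape that CompletionIterator expects; the names CompletionCallbacks and onCompletion are hypothetical:

import scala.runtime.AbstractFunction0;
import scala.runtime.BoxedUnit;

final class CompletionCallbacks {

    private CompletionCallbacks() {
    }

    // Adapts a Runnable to the scala.Function0<BoxedUnit> callback
    // expected by CompletionIterator.
    static scala.Function0<BoxedUnit> onCompletion(final Runnable r) {
        return new AbstractFunction0<BoxedUnit>() {
            @Override
            public BoxedUnit apply() {
                r.run();
                return BoxedUnit.UNIT;
            }
        };
    }
}

The call above would then read CompletionIterator$.MODULE$.apply(sorter.iterator(), CompletionCallbacks.onCompletion(sorter::stop)).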

Aggregations

BoxedUnit (scala.runtime.BoxedUnit): 19 uses
Promise (com.twitter.util.Promise): 9 uses
IOException (java.io.IOException): 5 uses
UnexpectedException (com.twitter.distributedlog.exceptions.UnexpectedException): 4 uses
FutureEventListener (com.twitter.util.FutureEventListener): 4 uses
AtomicInteger (java.util.concurrent.atomic.AtomicInteger): 4 uses
Stopwatch (com.google.common.base.Stopwatch): 3 uses
SafeRunnable (org.apache.bookkeeper.util.SafeRunnable): 3 uses
AbstractFunction0 (scala.runtime.AbstractFunction0): 3 uses
DLInterruptedException (com.twitter.distributedlog.exceptions.DLInterruptedException): 2 uses
OwnershipAcquireFailedException (com.twitter.distributedlog.exceptions.OwnershipAcquireFailedException): 2 uses
BulkWriteOp (com.twitter.distributedlog.service.stream.BulkWriteOp): 2 uses
Map (java.util.Map): 2 uses
RejectedExecutionException (java.util.concurrent.RejectedExecutionException): 2 uses
AbstractFunction1 (scala.runtime.AbstractFunction1): 2 uses
Props (akka.actor.Props): 1 use
VisibleForTesting (com.google.common.annotations.VisibleForTesting): 1 use
Optional (com.google.common.base.Optional): 1 use
Preconditions (com.google.common.base.Preconditions): 1 use
ImmutableMap (com.google.common.collect.ImmutableMap): 1 use