Example 1 with FailedException

Use of org.apache.storm.topology.FailedException in project storm by apache.

From the class TransactionalSpoutBatchExecutor, method execute:

@Override
public void execute(Tuple input) {
    TransactionAttempt attempt = (TransactionAttempt) input.getValue(0);
    try {
        if (input.getSourceStreamId().equals(TransactionalSpoutCoordinator.TRANSACTION_COMMIT_STREAM_ID)) {
            if (attempt.equals(_activeTransactions.get(attempt.getTransactionId()))) {
                ((ICommitterTransactionalSpout.Emitter) _emitter).commit(attempt);
                _activeTransactions.remove(attempt.getTransactionId());
                _collector.ack(input);
            } else {
                _collector.fail(input);
            }
        } else {
            _emitter.emitBatch(attempt, input.getValue(1), _collector);
            _activeTransactions.put(attempt.getTransactionId(), attempt);
            _collector.ack(input);
            BigInteger committed = (BigInteger) input.getValue(2);
            if (committed != null) {
                // valid to delete before what's been committed since 
                // those batches will never be accessed again
                _activeTransactions.headMap(committed).clear();
                _emitter.cleanupBefore(committed);
            }
        }
    } catch (FailedException e) {
        LOG.warn("Failed to emit batch for transaction", e);
        _collector.fail(input);
    }
}
Also used: FailedException (org.apache.storm.topology.FailedException), BigInteger (java.math.BigInteger)
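
To make the contract concrete, here is a minimal sketch of an emitter that raises FailedException. The class name, the Long metadata type, and the readBatch stub are hypothetical, not from the source. A FailedException thrown from emitBatch propagates to the catch block above, which fails the batch tuple so the coordinator replays the same attempt:

import java.io.IOException;
import java.math.BigInteger;
import java.util.List;

import org.apache.storm.coordination.BatchOutputCollector;
import org.apache.storm.topology.FailedException;
import org.apache.storm.transactional.ITransactionalSpout;
import org.apache.storm.transactional.TransactionAttempt;
import org.apache.storm.tuple.Values;

// Hypothetical emitter; the record source is a stub for illustration.
public class FlakySourceEmitter implements ITransactionalSpout.Emitter<Long> {
    @Override
    public void emitBatch(TransactionAttempt tx, Long coordinatorMeta,
                          BatchOutputCollector collector) {
        try {
            for (String record : readBatch(coordinatorMeta)) {
                collector.emit(new Values(tx, record));
            }
        } catch (IOException e) {
            // TransactionalSpoutBatchExecutor catches this and fails the
            // batch tuple, so the same attempt is replayed later.
            throw new FailedException(e);
        }
    }

    @Override
    public void cleanupBefore(BigInteger txid) {
    }

    @Override
    public void close() {
    }

    private List<String> readBatch(Long offset) throws IOException {
        throw new IOException("source unavailable"); // stub for illustration
    }
}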

Example 2 with FailedException

Use of org.apache.storm.topology.FailedException in project storm by apache.

From the class TransactionalSpoutCoordinator, method sync:

private void sync() {
    // note that sometimes the tuples active may be less than max_spout_pending, e.g.
    // max_spout_pending = 3
    // tx 1, 2, 3 active, tx 2 is acked. there won't be a commit for tx 2 (because tx 1 isn't committed yet),
    // and there won't be a batch for tx 4 because there's max_spout_pending tx active
    TransactionStatus maybeCommit = _activeTx.get(_currTransaction);
    if (maybeCommit != null && maybeCommit.status == AttemptStatus.PROCESSED) {
        maybeCommit.status = AttemptStatus.COMMITTING;
        _collector.emit(TRANSACTION_COMMIT_STREAM_ID, new Values(maybeCommit.attempt), maybeCommit.attempt);
    }
    try {
        if (_activeTx.size() < _maxTransactionActive) {
            BigInteger curr = _currTransaction;
            for (int i = 0; i < _maxTransactionActive; i++) {
                if ((_coordinatorState.hasCache(curr) || _coordinator.isReady()) && !_activeTx.containsKey(curr)) {
                    TransactionAttempt attempt = new TransactionAttempt(curr, _rand.nextLong());
                    Object state = _coordinatorState.getState(curr, _initializer);
                    _activeTx.put(curr, new TransactionStatus(attempt));
                    _collector.emit(TRANSACTION_BATCH_STREAM_ID, new Values(attempt, state, previousTransactionId(_currTransaction)), attempt);
                }
                curr = nextTransactionId(curr);
            }
        }
    } catch (FailedException e) {
        LOG.warn("Failed to get metadata for a transaction", e);
    }
}
Also used: FailedException (org.apache.storm.topology.FailedException), Values (org.apache.storm.tuple.Values), BigInteger (java.math.BigInteger)
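
The FailedException caught in sync() typically originates in the spout's coordinator, whose initializeTransaction result is what getState computes. A minimal sketch of such a coordinator follows; the class name and the latestOffset stub are hypothetical:

import java.math.BigInteger;

import org.apache.storm.topology.FailedException;
import org.apache.storm.transactional.ITransactionalSpout;

// Hypothetical coordinator; the metadata lookup is a stub for illustration.
public class OffsetCoordinator implements ITransactionalSpout.Coordinator<Long> {
    @Override
    public Long initializeTransaction(BigInteger txid, Long prevMetadata) {
        Long offset = latestOffset();
        if (offset == null) {
            // sync() above catches this, logs a warning, and simply skips
            // starting new batches until the next sync attempt.
            throw new FailedException("offset metadata not yet available");
        }
        return offset;
    }

    @Override
    public boolean isReady() {
        return true;
    }

    @Override
    public void close() {
    }

    private Long latestOffset() {
        return null; // stub for illustration
    }
}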

Example 3 with FailedException

Use of org.apache.storm.topology.FailedException in project storm by apache.

From the class MongoState, method batchRetrieve:

/**
 * Batch retrieve values.
 * @param tridentTuples trident tuples
 * @return values
 */
public List<List<Values>> batchRetrieve(List<TridentTuple> tridentTuples) {
    List<List<Values>> batchRetrieveResult = Lists.newArrayList();
    try {
        for (TridentTuple tuple : tridentTuples) {
            Bson filter = options.queryCreator.createFilter(tuple);
            Document doc = mongoClient.find(filter);
            List<Values> values = options.lookupMapper.toTuple(tuple, doc);
            batchRetrieveResult.add(values);
        }
    } catch (Exception e) {
        LOG.warn("Batch get operation failed. Triggering replay.", e);
        throw new FailedException(e);
    }
    return batchRetrieveResult;
}
Also used: FailedException (org.apache.storm.topology.FailedException), Values (org.apache.storm.tuple.Values), List (java.util.List), Document (org.bson.Document), TridentTuple (org.apache.storm.trident.tuple.TridentTuple), Bson (org.bson.conversions.Bson)
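
The catch-wrap-rethrow pattern above is not Mongo-specific; any Trident read path can use it. A minimal sketch under assumed names (LookupState and LookupQuery are hypothetical, not part of storm-mongodb):

import java.util.ArrayList;
import java.util.List;

import org.apache.storm.topology.FailedException;
import org.apache.storm.trident.operation.TridentCollector;
import org.apache.storm.trident.state.BaseQueryFunction;
import org.apache.storm.trident.state.State;
import org.apache.storm.trident.tuple.TridentTuple;
import org.apache.storm.tuple.Values;

// Hypothetical key-value state interface, for illustration only.
interface LookupState extends State {
    String lookup(String key) throws Exception;
}

// Same pattern as MongoState.batchRetrieve: wrap any read failure in
// FailedException so Trident fails and replays the whole batch.
public class LookupQuery extends BaseQueryFunction<LookupState, String> {
    @Override
    public List<String> batchRetrieve(LookupState state, List<TridentTuple> args) {
        List<String> results = new ArrayList<>();
        try {
            for (TridentTuple tuple : args) {
                results.add(state.lookup(tuple.getString(0)));
            }
        } catch (Exception e) {
            throw new FailedException("batch lookup failed", e);
        }
        return results;
    }

    @Override
    public void execute(TridentTuple tuple, String result, TridentCollector collector) {
        collector.emit(new Values(result));
    }
}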

Example 4 with FailedException

Use of org.apache.storm.topology.FailedException in project storm by apache.

From the class MongoMapState, method multiPut:

@Override
public void multiPut(List<List<Object>> keysList, List<T> values) {
    try {
        for (int i = 0; i < keysList.size(); i++) {
            List<Object> keys = keysList.get(i);
            T value = values.get(i);
            Bson filter = options.queryCreator.createFilterByKeys(keys);
            Document document = options.mapper.toDocumentByKeys(keys);
            document.append(options.serDocumentField, this.serializer.serialize(value));
            // upsert = true, multi = false: a replayed batch rewrites the
            // same keys instead of inserting duplicates
            this.mongoClient.update(filter, document, true, false);
        }
    } catch (Exception e) {
        LOG.warn("Batch write operation failed.", e);
        throw new FailedException(e);
    }
}
Also used: FailedException (org.apache.storm.topology.FailedException), Document (org.bson.Document), Bson (org.bson.conversions.Bson)
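
A note on the design: because FailedException triggers a replay of the whole batch, the write itself must be idempotent, which is why multiPut issues upserts keyed by the tuple's keys. The sketch below shows the same pattern with hypothetical names (WritableState and KeyValueUpdater are illustrative):

import java.util.List;

import org.apache.storm.topology.FailedException;
import org.apache.storm.trident.operation.TridentCollector;
import org.apache.storm.trident.state.BaseStateUpdater;
import org.apache.storm.trident.state.State;
import org.apache.storm.trident.tuple.TridentTuple;

// Hypothetical writable state interface, for illustration only.
interface WritableState extends State {
    void put(String key, String value) throws Exception;
}

// Mirrors multiPut above: writes must be idempotent (upserts), because a
// FailedException makes Trident replay the batch and re-apply every write.
public class KeyValueUpdater extends BaseStateUpdater<WritableState> {
    @Override
    public void updateState(WritableState state, List<TridentTuple> tuples,
                            TridentCollector collector) {
        try {
            for (TridentTuple tuple : tuples) {
                state.put(tuple.getString(0), tuple.getString(1));
            }
        } catch (Exception e) {
            throw new FailedException(e);
        }
    }
}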

Example 5 with FailedException

Use of org.apache.storm.topology.FailedException in project storm by apache.

From the class TridentBoltExecutor, method finishBatch:

private boolean finishBatch(TrackedBatch tracked, Tuple finishTuple) {
    boolean success = true;
    try {
        bolt.finishBatch(tracked.info);
        String stream = coordStream(tracked.info.batchGroup);
        // tell each downstream task how many tuples this task emitted to it,
        // so the receiver can detect when it has seen the complete batch
        for (Integer task : tracked.condition.targetTasks) {
            collector.emitDirect(task, stream, finishTuple, new Values(tracked.info.batchId, Utils.get(tracked.taskEmittedTuples, task, 0)));
        }
        if (tracked.delayedAck != null) {
            collector.ack(tracked.delayedAck);
            tracked.delayedAck = null;
        }
    } catch (FailedException e) {
        // a FailedException from the wrapped bolt fails the whole batch
        failBatch(tracked, e);
        success = false;
    }
    batches.remove(tracked.info.batchId.getId());
    return success;
}
Also used: FailedException (org.apache.storm.topology.FailedException), ReportedFailedException (org.apache.storm.topology.ReportedFailedException), Values (org.apache.storm.tuple.Values)
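
bolt.finishBatch is where user code usually throws: for an aggregation bolt, Trident calls the aggregator's complete() while finishing the batch, so a FailedException raised there lands in the catch block above and fails the batch. A minimal sketch; FlushingCount and its flush stub are hypothetical:

import org.apache.storm.topology.FailedException;
import org.apache.storm.trident.operation.BaseAggregator;
import org.apache.storm.trident.operation.TridentCollector;
import org.apache.storm.trident.tuple.TridentTuple;
import org.apache.storm.tuple.Values;

// Hypothetical aggregator: a FailedException thrown from complete() surfaces
// in TridentBoltExecutor.finishBatch, which calls failBatch() so the batch
// is replayed.
public class FlushingCount extends BaseAggregator<long[]> {
    @Override
    public long[] init(Object batchId, TridentCollector collector) {
        return new long[] { 0 };
    }

    @Override
    public void aggregate(long[] count, TridentTuple tuple, TridentCollector collector) {
        count[0]++;
    }

    @Override
    public void complete(long[] count, TridentCollector collector) {
        if (!flush(count[0])) { // illustrative flush to an external sink
            throw new FailedException("could not flush batch count");
        }
        collector.emit(new Values(count[0]));
    }

    private boolean flush(long count) {
        return true; // stub for illustration
    }
}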

Aggregations

Classes most frequently used alongside FailedException in these sources, with occurrence counts:

FailedException (org.apache.storm.topology.FailedException) 27
TridentTuple (org.apache.storm.trident.tuple.TridentTuple) 11
ArrayList (java.util.ArrayList) 10
Values (org.apache.storm.tuple.Values) 8
List (java.util.List) 5
IOException (java.io.IOException) 4
Document (org.bson.Document) 4
Statement (com.datastax.driver.core.Statement) 3
Bson (org.bson.conversions.Bson) 3
BatchStatement (com.datastax.driver.core.BatchStatement) 2
InterruptedIOException (java.io.InterruptedIOException) 2
BigInteger (java.math.BigInteger) 2
Get (org.apache.hadoop.hbase.client.Get) 2
Result (org.apache.hadoop.hbase.client.Result) 2
ColumnList (org.apache.storm.hbase.common.ColumnList) 2
ReportedFailedException (org.apache.storm.topology.ReportedFailedException) 2
ResultSet (com.datastax.driver.core.ResultSet) 1
Row (com.datastax.driver.core.Row) 1
BufferedWriter (java.io.BufferedWriter) 1
OutputStreamWriter (java.io.OutputStreamWriter) 1