
Example 1 with IndexMetaData

Usage of org.apache.phoenix.hbase.index.covered.IndexMetaData in the Apache Phoenix project, from the class IndexBuildManager, method getIndexUpdate.

public Collection<Pair<Mutation, byte[]>> getIndexUpdate(MiniBatchOperationInProgress<Mutation> miniBatchOp, Collection<? extends Mutation> mutations) throws Throwable {
    // notify the delegate that we have started processing a batch
    final IndexMetaData indexMetaData = this.delegate.getIndexMetaData(miniBatchOp);
    this.delegate.batchStarted(miniBatchOp, indexMetaData);
    // parallelize each mutation into its own task
    // each task is cancelable via two mechanisms: (1) underlying HRegion is closing (which would
    // fail lookups/scanning) and (2) by stopping this via the #stop method. Interrupts will only be
    // acknowledged on each thread before doing the actual lookup, but after that depends on the
    // underlying builder to look for the closed flag.
    TaskBatch<Collection<Pair<Mutation, byte[]>>> tasks = new TaskBatch<Collection<Pair<Mutation, byte[]>>>(mutations.size());
    for (final Mutation m : mutations) {
        tasks.add(new Task<Collection<Pair<Mutation, byte[]>>>() {

            @Override
            public Collection<Pair<Mutation, byte[]>> call() throws IOException {
                return delegate.getIndexUpdate(m, indexMetaData);
            }
        });
    }
    List<Collection<Pair<Mutation, byte[]>>> allResults = null;
    try {
        allResults = pool.submitUninterruptible(tasks);
    } catch (CancellationException e) {
        throw e;
    } catch (ExecutionException e) {
        LOG.error("Found a failed index update!");
        throw e.getCause();
    }
    // we can only get here if we get successes from each of the tasks, so each of these must have a
    // correct result
    Collection<Pair<Mutation, byte[]>> results = new ArrayList<Pair<Mutation, byte[]>>();
    for (Collection<Pair<Mutation, byte[]>> result : allResults) {
        assert result != null : "Found an unsuccessful result, but didn't propagate a failure earlier";
        results.addAll(result);
    }
    return results;
}
Also used:
ArrayList (java.util.ArrayList)
IOException (java.io.IOException)
TaskBatch (org.apache.phoenix.hbase.index.parallel.TaskBatch)
IndexMetaData (org.apache.phoenix.hbase.index.covered.IndexMetaData)
CancellationException (java.util.concurrent.CancellationException)
Collection (java.util.Collection)
Mutation (org.apache.hadoop.hbase.client.Mutation)
ExecutionException (java.util.concurrent.ExecutionException)
Pair (org.apache.hadoop.hbase.util.Pair)
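The method above follows a fan-out/fan-in pattern: each mutation becomes its own task, all tasks run in parallel on a pool, and the per-task results are flattened into one collection, with an ExecutionException unwrapped to its cause. The following is a minimal, self-contained sketch of that pattern using plain java.util.concurrent in place of Phoenix's TaskBatch and ThreadPoolManager; the names IndexUpdateDemo and buildUpdatesFor are illustrative stand-ins, not Phoenix API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class IndexUpdateDemo {

    // Stand-in for delegate.getIndexUpdate(m, indexMetaData): derive
    // zero or more "index updates" from one input row.
    static List<String> buildUpdatesFor(String row) {
        List<String> updates = new ArrayList<>();
        updates.add("idx:" + row);
        return updates;
    }

    public static List<String> getIndexUpdates(List<String> rows) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            // Parallelize each row into its own task, mirroring the TaskBatch loop.
            List<Callable<List<String>>> tasks = new ArrayList<>();
            for (final String row : rows) {
                tasks.add(() -> buildUpdatesFor(row));
            }
            // invokeAll blocks until every task completes, playing the role
            // of pool.submitUninterruptible(tasks) in the Phoenix code.
            List<Future<List<String>>> futures = pool.invokeAll(tasks);
            // Flatten: we only get here if submission succeeded, so each
            // future holds either a result or the task's failure.
            List<String> results = new ArrayList<>();
            for (Future<List<String>> f : futures) {
                try {
                    results.addAll(f.get());
                } catch (ExecutionException e) {
                    // Propagate the underlying failure, as the Phoenix
                    // method does with throw e.getCause().
                    throw new RuntimeException(e.getCause());
                }
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // invokeAll returns futures in submission order, so the flattened
        // result preserves the input order.
        List<String> out = getIndexUpdates(List.of("r1", "r2", "r3"));
        System.out.println(out); // prints [idx:r1, idx:r2, idx:r3]
    }
}
```

Unlike the real TaskBatch, this sketch has no cooperative cancellation: Phoenix's tasks also check a stopped/closing flag so in-flight index lookups can abort when the region closes.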
