
Example 1 with TransactionFailureException

Use of org.apache.tephra.TransactionFailureException in project phoenix by apache.

From the class MutationState, method commitDDLFence.

/**
     * Commit a write fence when creating an index so that we can detect
     * when a data table transaction is started before the create index
     * but completes after it. In this case, we need to rerun the data
     * table transaction after the index creation so that the index rows
     * are generated. See {@link #addDMLFence(PTable)} and TEPHRA-157
     * for more information.
     * @param dataTable the data table upon which an index is being added
     * @throws SQLException
     */
public void commitDDLFence(PTable dataTable) throws SQLException {
    if (dataTable.isTransactional()) {
        byte[] key = dataTable.getName().getBytes();
        boolean success = false;
        try {
            FenceWait fenceWait = VisibilityFence.prepareWait(key, connection.getQueryServices().getTransactionSystemClient());
            fenceWait.await(10000, TimeUnit.MILLISECONDS);
            success = true;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new SQLExceptionInfo.Builder(SQLExceptionCode.INTERRUPTED_EXCEPTION).setRootCause(e).build().buildException();
        } catch (TimeoutException | TransactionFailureException e) {
            throw new SQLExceptionInfo.Builder(SQLExceptionCode.TX_UNABLE_TO_GET_WRITE_FENCE).setSchemaName(dataTable.getSchemaName().getString()).setTableName(dataTable.getTableName().getString()).build().buildException();
        } finally {
            // TODO: seems like an autonomous tx capability in Tephra would be useful here.
            try {
                txContext.start();
                if (logger.isInfoEnabled() && success)
                    logger.info("Added write fence at ~" + getTransaction().getReadPointer());
            } catch (TransactionFailureException e) {
                throw TransactionUtil.getTransactionFailureException(e);
            }
        }
    }
}
Also used: TransactionFailureException(org.apache.tephra.TransactionFailureException), FenceWait(org.apache.tephra.visibility.FenceWait), PhoenixIndexBuilder(org.apache.phoenix.index.PhoenixIndexBuilder), SQLExceptionInfo(org.apache.phoenix.exception.SQLExceptionInfo), TimeoutException(java.util.concurrent.TimeoutException)
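
The javadoc above also refers to a complementary DML-side fence. As a rough illustration of how the two halves of the pattern fit together with Tephra's visibility API (VisibilityFence, FenceWait, TransactionSystemClient), here is a minimal sketch; the method names and the fixed timeout are illustrative, not the exact Phoenix implementation:

// Write path (DML side): register a fence for the table so that a later DDL
// operation can detect in-flight data-table transactions. Illustrative only.
TransactionAware dmlFence(byte[] dataTableKey) {
    return VisibilityFence.create(dataTableKey);
}

// DDL side: block until every transaction that started before the fence has completed.
void awaitWriteFence(byte[] dataTableKey, TransactionSystemClient txClient)
        throws TransactionFailureException, InterruptedException, TimeoutException {
    FenceWait fenceWait = VisibilityFence.prepareWait(dataTableKey, txClient);
    // the timeout value is an arbitrary choice for this example
    fenceWait.await(10000, TimeUnit.MILLISECONDS);
}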

Example 2 with TransactionFailureException

Use of org.apache.tephra.TransactionFailureException in project cdap by caskdata.

From the class DefaultCheckpointManager, method saveCheckpoints.

@Override
public void saveCheckpoints(final Map<Integer, ? extends Checkpoint> checkpoints) throws Exception {
    // if the checkpoints have not changed, we skip writing to table and return.
    if (lastCheckpoint.equals(checkpoints)) {
        return;
    }
    try {
        lastCheckpoint = Transactions.execute(transactional, new TxCallable<Map<Integer, Checkpoint>>() {

            @Override
            public Map<Integer, Checkpoint> call(DatasetContext context) throws Exception {
                Map<Integer, Checkpoint> result = new HashMap<>();
                Table table = getCheckpointTable(context);
                for (Map.Entry<Integer, ? extends Checkpoint> entry : checkpoints.entrySet()) {
                    byte[] key = Bytes.add(rowKeyPrefix, Bytes.toBytes(entry.getKey()));
                    Checkpoint checkpoint = entry.getValue();
                    table.put(key, OFFSET_COL_NAME, Bytes.toBytes(checkpoint.getNextOffset()));
                    table.put(key, NEXT_TIME_COL_NAME, Bytes.toBytes(checkpoint.getNextEventTime()));
                    table.put(key, MAX_TIME_COL_NAME, Bytes.toBytes(checkpoint.getMaxEventTime()));
                    result.put(entry.getKey(), new Checkpoint(checkpoint.getNextOffset(), checkpoint.getNextEventTime(), checkpoint.getMaxEventTime()));
                }
                return result;
            }
        });
    } catch (TransactionFailureException e) {
        throw Transactions.propagate(e, ServiceUnavailableException.class);
    }
    LOG.trace("Saved checkpoints for partitions {}", checkpoints);
}
Also used: TransactionFailureException(org.apache.tephra.TransactionFailureException), Table(co.cask.cdap.api.dataset.table.Table), HashMap(java.util.HashMap), ServiceUnavailableException(co.cask.cdap.common.ServiceUnavailableException), DatasetContext(co.cask.cdap.api.data.DatasetContext), ImmutableMap(com.google.common.collect.ImmutableMap), Map(java.util.Map), TxCallable(co.cask.cdap.data2.transaction.TxCallable)
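
Transactions.propagate above rethrows the wrapped cause as a more specific exception type where possible. A generic sketch of that idea follows; it is illustrative and not CDAP's actual Transactions.propagate, whose behavior may differ in details such as how unmatched causes are wrapped:

// Illustrative helper: if the TransactionFailureException wraps a cause of the requested
// type, rethrow that cause; otherwise rethrow the original exception unchanged.
static <X extends Exception> RuntimeException propagateAs(TransactionFailureException e, Class<X> type)
        throws X, TransactionFailureException {
    Throwable cause = e.getCause();
    if (type.isInstance(cause)) {
        throw type.cast(cause);
    }
    throw e;
}

A caller would then write throw propagateAs(e, ServiceUnavailableException.class), mirroring the catch block above.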

Example 3 with TransactionFailureException

Use of org.apache.tephra.TransactionFailureException in project cdap by caskdata.

From the class Transactions, method execute.

/**
   * Executes the given {@link TxCallable} using the given {@link Transactional}.
   *
   * @param transactional the {@link Transactional} to use for transactional execution.
   * @param callable the {@link TxCallable} to be executed inside a transaction
   * @param <V> type of the result
   * @return value returned by the given {@link TxCallable}.
   * @throws TransactionFailureException if execution of the given {@link TxCallable} fails in a transaction
   *
   * TODO: CDAP-6103 Move this to {@link Transactional} when revamping tx supports in program.
   */
public static <V> V execute(Transactional transactional, final TxCallable<V> callable) throws TransactionFailureException {
    final AtomicReference<V> result = new AtomicReference<>();
    transactional.execute(new TxRunnable() {

        @Override
        public void run(DatasetContext context) throws Exception {
            result.set(callable.call(context));
        }
    });
    return result.get();
}
Also used: TxRunnable(co.cask.cdap.api.TxRunnable), AtomicReference(java.util.concurrent.atomic.AtomicReference), DatasetContext(co.cask.cdap.api.data.DatasetContext), TransactionFailureException(org.apache.tephra.TransactionFailureException)
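
A hypothetical call site for this helper, assuming a Transactional instance and a Table dataset are available; the dataset name, row key, and column are assumptions made for this example:

byte[] rowKey = Bytes.toBytes(42);
long nextOffset = Transactions.execute(transactional, new TxCallable<Long>() {

    @Override
    public Long call(DatasetContext context) throws Exception {
        // dataset and column names are illustrative, not from the original source
        Table table = context.getDataset("checkpoints");
        return Bytes.toLong(table.get(rowKey, Bytes.toBytes("offset")));
    }
});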

Example 4 with TransactionFailureException

Use of org.apache.tephra.TransactionFailureException in project cdap by caskdata.

From the class MapReduceRuntimeService, method destroy.

/**
   * Calls the destroy method of {@link ProgramLifecycle}.
   */
private void destroy(final boolean succeeded, final String failureInfo) throws Exception {
    // if any exception happens during output committing, we want the MapReduce to fail.
    // for that to happen it is not sufficient to set the status to failed, we have to throw an exception,
    // otherwise the shutdown completes successfully and the completed() callback is called.
    // thus: remember the exception and throw it at the end.
    final AtomicReference<Exception> failureCause = new AtomicReference<>();
    // TODO (CDAP-1952): this should be done in the output committer, to make the M/R fail if addPartition fails
    try {
        context.execute(new TxRunnable() {

            @Override
            public void run(DatasetContext ctxt) throws Exception {
                ClassLoader oldClassLoader = ClassLoaders.setContextClassLoader(job.getConfiguration().getClassLoader());
                try {
                    for (Map.Entry<String, ProvidedOutput> output : context.getOutputs().entrySet()) {
                        commitOutput(succeeded, output.getKey(), output.getValue().getOutputFormatProvider(), failureCause);
                        if (succeeded && failureCause.get() != null) {
                            // mapreduce was successful but this output committer failed: call onFailure() for all committers
                            for (ProvidedOutput toFail : context.getOutputs().values()) {
                                commitOutput(false, toFail.getAlias(), toFail.getOutputFormatProvider(), failureCause);
                            }
                            break;
                        }
                    }
                    // if there was a failure, we must throw an exception to fail the transaction
                    // this will roll back all the outputs and also make sure that postCommit() is not called
                    // throwing the failure cause: it will be wrapped in a TxFailure and handled in the outer catch()
                    Exception cause = failureCause.get();
                    if (cause != null) {
                        failureCause.set(null);
                        throw cause;
                    }
                } finally {
                    ClassLoaders.setContextClassLoader(oldClassLoader);
                }
            }
        });
    } catch (TransactionFailureException e) {
        LOG.error("Transaction failure when committing dataset outputs", e);
        if (failureCause.get() != null) {
            failureCause.get().addSuppressed(e);
        } else {
            failureCause.set(e);
        }
    }
    final boolean success = succeeded && failureCause.get() == null;
    context.setState(getProgramState(success, failureInfo));
    final TransactionControl txControl = mapReduce instanceof ProgramLifecycle ? Transactions.getTransactionControl(TransactionControl.IMPLICIT, MapReduce.class, mapReduce, "destroy") : TransactionControl.IMPLICIT;
    try {
        if (TransactionControl.IMPLICIT == txControl) {
            context.execute(new TxRunnable() {

                @Override
                public void run(DatasetContext context) throws Exception {
                    doDestroy(success);
                }
            });
        } else {
            doDestroy(success);
        }
    } catch (Throwable e) {
        if (e instanceof TransactionFailureException && e.getCause() != null && !(e instanceof TransactionConflictException)) {
            e = e.getCause();
        }
        LOG.warn("Error executing the destroy method of the MapReduce program {}", context.getProgram().getName(), e);
    }
    // this is needed to make the run fail if there was an exception. See comment at beginning of this method
    if (failureCause.get() != null) {
        throw failureCause.get();
    }
}
Also used: ProgramLifecycle(co.cask.cdap.api.ProgramLifecycle), TransactionConflictException(org.apache.tephra.TransactionConflictException), AtomicReference(java.util.concurrent.atomic.AtomicReference), ProvidedOutput(co.cask.cdap.internal.app.runtime.batch.dataset.output.ProvidedOutput), ProvisionException(com.google.inject.ProvisionException), IOException(java.io.IOException), TransactionFailureException(org.apache.tephra.TransactionFailureException), URISyntaxException(java.net.URISyntaxException), AbstractMapReduce(co.cask.cdap.api.mapreduce.AbstractMapReduce), MapReduce(co.cask.cdap.api.mapreduce.MapReduce), JarEntry(java.util.jar.JarEntry), TxRunnable(co.cask.cdap.api.TxRunnable), TransactionControl(co.cask.cdap.api.annotation.TransactionControl), WeakReferenceDelegatorClassLoader(co.cask.cdap.common.lang.WeakReferenceDelegatorClassLoader), CombineClassLoader(co.cask.cdap.common.lang.CombineClassLoader), DatasetContext(co.cask.cdap.api.data.DatasetContext)
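
The reasoning in the comments at the top of destroy() reduces to a small skeleton: remember the first failure, keep shutting down, and rethrow at the very end so the run is still reported as failed. A stripped-down, illustrative sketch (the helper methods here are hypothetical, not part of the original class):

AtomicReference<Exception> failureCause = new AtomicReference<>();
try {
    commitAllOutputs();                 // hypothetical transactional commit step
} catch (TransactionFailureException e) {
    failureCause.set(e);                // remember the failure, but keep shutting down
}
runDestroyLifecycle();                  // always runs, even after a commit failure
if (failureCause.get() != null) {
    // rethrow at the end so the completed() callback does not report success
    throw failureCause.get();
}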

Example 5 with TransactionFailureException

Use of org.apache.tephra.TransactionFailureException in project cdap by caskdata.

From the class DynamicTransactionExecutor, method executeOnce.

private <I, O> O executeOnce(Function<I, O> function, I input) throws TransactionFailureException {
    TransactionContext txContext = txContextFactory.newTransactionContext();
    txContext.start();
    O o = null;
    try {
        o = function.apply(input);
    } catch (Throwable e) {
        txContext.abort(new TransactionFailureException("Transaction function failure for transaction. ", e));
        // abort will throw
    }
    // will throw if something goes wrong
    txContext.finish();
    return o;
}
Also used: TransactionFailureException(org.apache.tephra.TransactionFailureException), TransactionContext(org.apache.tephra.TransactionContext)
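
For comparison, the same start/apply/finish cycle can be driven directly against Tephra's TransactionContext. A minimal sketch, assuming a TransactionSystemClient and a TransactionAware dataset are available; there is no retry-on-conflict logic here, which the real executor layers on top:

TransactionContext txContext = new TransactionContext(txClient, txAwareDataset);
txContext.start();
try {
    // perform work against the TransactionAware dataset here
    doWork(txAwareDataset);             // hypothetical work method
} catch (Throwable t) {
    // abort rolls back and rethrows as a TransactionFailureException
    txContext.abort(new TransactionFailureException("transaction work failed", t));
}
// finish() commits, or throws TransactionFailureException / TransactionConflictException on failure
txContext.finish();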

Aggregations

TransactionFailureException (org.apache.tephra.TransactionFailureException): 55
Test (org.junit.Test): 19
TransactionContext (org.apache.tephra.TransactionContext): 17
IOException (java.io.IOException): 16
TransactionExecutor (org.apache.tephra.TransactionExecutor): 12
TransactionConflictException (org.apache.tephra.TransactionConflictException): 8
TxRunnable (co.cask.cdap.api.TxRunnable): 6
DatasetContext (co.cask.cdap.api.data.DatasetContext): 6
Location (org.apache.twill.filesystem.Location): 6
TransactionAware (org.apache.tephra.TransactionAware): 5
DataSetException (co.cask.cdap.api.dataset.DataSetException): 4
DatasetManagementException (co.cask.cdap.api.dataset.DatasetManagementException): 4
Table (co.cask.cdap.api.dataset.table.Table): 4
ConsumerConfig (co.cask.cdap.data2.queue.ConsumerConfig): 4
List (java.util.List): 4
Map (java.util.Map): 4
ArrayList (java.util.ArrayList): 3
Collection (java.util.Collection): 3
TimeoutException (java.util.concurrent.TimeoutException): 3
Transaction (org.apache.tephra.Transaction): 3