Example 91 with Transaction

Use of org.apache.tephra.Transaction in project cdap by caskdata.

The class SingleThreadDatasetCache, method addExtraTransactionAware.

@Override
public void addExtraTransactionAware(TransactionAware txAware) {
    // LIFO ordering ensures the TMS tx aware is always the last to be invoked.
    if (!extraTxAwares.contains(txAware)) {
        extraTxAwares.addFirst(txAware);
        // Start the transaction on the tx aware if a transaction is already active
        Transaction currentTx = txContext == null ? null : txContext.getCurrentTransaction();
        if (currentTx != null) {
            txAware.startTx(currentTx);
        }
    }
}
Also used: Transaction (org.apache.tephra.Transaction)
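The pattern above can be reduced to a small sketch. The names here (TxAwareRegistry, TxAware, beginTx) are hypothetical, not CDAP API: members added later go to the front of the deque, so the first-registered member stays last, and a member added while a transaction is active joins it immediately.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a LIFO registry of transaction-aware members (hypothetical names).
public class TxAwareRegistry {
    interface TxAware {
        void startTx(long txId);
    }

    private final Deque<TxAware> members = new ArrayDeque<>();
    private Long currentTx; // null when no transaction is active

    void beginTx(long txId) {
        currentTx = txId;
        for (TxAware m : members) {
            m.startTx(txId);
        }
    }

    void add(TxAware m) {
        if (!members.contains(m)) {
            members.addFirst(m);       // LIFO: keeps the earliest-added member last
            if (currentTx != null) {
                m.startTx(currentTx);  // late joiners see the already-active transaction
            }
        }
    }
}
```

The key detail mirrored from the snippet is the null-check: a member added outside any transaction is simply registered, while one added mid-transaction is started immediately so it participates in the current transaction.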

Example 92 with Transaction

Use of org.apache.tephra.Transaction in project cdap by caskdata.

The class DequeueScanObserver, method preScannerOpen.

@Override
public RegionScanner preScannerOpen(ObserverContext<RegionCoprocessorEnvironment> e, Scan scan, RegionScanner s) throws IOException {
    ConsumerConfig consumerConfig = DequeueScanAttributes.getConsumerConfig(scan);
    Transaction tx = DequeueScanAttributes.getTx(scan);
    if (consumerConfig == null || tx == null) {
        return super.preScannerOpen(e, scan, s);
    }
    Filter dequeueFilter = new DequeueFilter(consumerConfig, tx);
    Filter existing = scan.getFilter();
    if (existing != null) {
        Filter combined = new FilterList(FilterList.Operator.MUST_PASS_ALL, existing, dequeueFilter);
        scan.setFilter(combined);
    } else {
        scan.setFilter(dequeueFilter);
    }
    return super.preScannerOpen(e, scan, s);
}
Also used: Transaction (org.apache.tephra.Transaction), Filter (org.apache.hadoop.hbase.filter.Filter), ConsumerConfig (co.cask.cdap.data2.queue.ConsumerConfig), FilterList (org.apache.hadoop.hbase.filter.FilterList)
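FilterList with Operator.MUST_PASS_ALL behaves as a logical AND of its filters: a row survives the scan only if every filter accepts it. A minimal sketch of that semantics, using plain java.util.function.Predicate rather than the HBase Filter API:

```java
import java.util.List;
import java.util.function.Predicate;

// Illustrative only: models the AND semantics of FilterList.Operator.MUST_PASS_ALL.
public class MustPassAll {
    // A value passes the combined filter only if every member filter accepts it.
    static <T> Predicate<T> mustPassAll(List<Predicate<T>> filters) {
        return value -> filters.stream().allMatch(f -> f.test(value));
    }
}
```

This is why the snippet wraps the scan's existing filter and the DequeueFilter together: the dequeue logic is added without loosening whatever filtering the caller already configured.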

Example 94 with Transaction

Use of org.apache.tephra.Transaction in project cdap by caskdata.

The class AppWithCustomTx, method recordTransaction.

/**
 * If in a transaction, records the timeout that the current transaction was given, or "default" if no explicit
 * timeout was given. Otherwise does nothing.
 *
 * Note: we know whether and what explicit timeout was given, because we inject a {@link RevealingTxSystemClient},
 *       which returns a {@link RevealingTransaction} for {@link TransactionSystemClient#startShort(int)} only.
 */
static void recordTransaction(DatasetContext context, String row, String column) {
    TransactionCapturingTable capture = context.getDataset(CAPTURE);
    Transaction tx = capture.getTx();
    // we cannot cast because the RevealingTransaction is not visible in the program class loader
    String value = DEFAULT;
    if (tx == null) {
        try {
            capture.getTable().put(new Put(row, column, value));
            throw new RuntimeException("put to table without transaction should have failed.");
        } catch (DataSetException e) {
            // expected
        }
        return;
    }
    if ("RevealingTransaction".equals(tx.getClass().getSimpleName())) {
        int txTimeout;
        try {
            txTimeout = (int) tx.getClass().getField("timeout").get(tx);
        } catch (Exception e) {
            throw Throwables.propagate(e);
        }
        value = String.valueOf(txTimeout);
    }
    capture.getTable().put(new Put(row, column, value));
}
Also used: Transaction (org.apache.tephra.Transaction), RevealingTransaction (co.cask.cdap.test.RevealingTxSystemClient.RevealingTransaction), DataSetException (co.cask.cdap.api.dataset.DataSetException), Put (co.cask.cdap.api.dataset.table.Put), TransactionFailureException (org.apache.tephra.TransactionFailureException), IOException (java.io.IOException)
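The reflective field read above can be isolated into a small sketch. RevealingTx and readTimeout below are hypothetical stand-ins for RevealingTransaction; the point is that a public field can be read by name via reflection even when the concrete type cannot be referenced at compile time (here, because it is not visible in the program class loader):

```java
// Sketch of reading a public field by name when the concrete class cannot be cast to.
public class FieldReflection {
    // Stand-in for a subclass that is invisible to the caller's class loader.
    public static class RevealingTx {
        public final int timeout;
        public RevealingTx(int timeout) { this.timeout = timeout; }
    }

    static int readTimeout(Object tx) {
        try {
            // Same idiom as the snippet: look up the public "timeout" field and unbox it.
            return (int) tx.getClass().getField("timeout").get(tx);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

As in the snippet, the caller identifies the subclass by its simple name before attempting the read, since an instanceof check against an invisible class would not compile.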

Example 95 with Transaction

Use of org.apache.tephra.Transaction in project cdap by caskdata.

The class HBaseTableExporter, method doMain.

public void doMain(String[] args) throws Exception {
    if (args.length < 1) {
        printHelp();
        return;
    }
    String tableName = args[0];
    try {
        startUp();
        Transaction tx = txClient.startLong();
        Job job = createSubmittableJob(tx, tableName);
        if (!job.waitForCompletion(true)) {
            LOG.error("MapReduce job failed!");
            throw new RuntimeException("Failed to run the MapReduce job.");
        }
        // Always commit the transaction, since we are not doing any data update
        // operation in this tool.
        txClient.commitOrThrow(tx);
        System.out.println("Export operation complete. HFiles are stored at location " + bulkloadDir.toString());
    } finally {
        stop();
    }
}
Also used: Transaction (org.apache.tephra.Transaction), Job (org.apache.hadoop.mapreduce.Job)
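The transaction lifecycle in doMain can be sketched generically. TxClient and withReadOnlyTx below are hypothetical names, not the Tephra API; the sketch shows the start/work/commit shape, committing unconditionally because a read-only job performs no updates that could conflict:

```java
import java.util.function.LongFunction;

// Sketch (stub client, hypothetical names) of the read-only transaction lifecycle.
public class ReadOnlyTxPattern {
    interface TxClient {
        long startLong();
        void commitOrThrow(long tx);
    }

    static <R> R withReadOnlyTx(TxClient client, LongFunction<R> work) {
        long tx = client.startLong();       // long transaction: no timeout for batch work
        R result = work.apply(tx);
        client.commitOrThrow(tx);           // always commit: no data updates in this tool
        return result;
    }
}
```

A write path would instead need conflict handling (e.g. abort on failure); the snippet can skip that precisely because the export only reads.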

Aggregations

Transaction (org.apache.tephra.Transaction): 99
Test (org.junit.Test): 54
TransactionAware (org.apache.tephra.TransactionAware): 34
Table (co.cask.cdap.api.dataset.table.Table): 29
DatasetAdmin (co.cask.cdap.api.dataset.DatasetAdmin): 27
HBaseTable (co.cask.cdap.data2.dataset2.lib.table.hbase.HBaseTable): 22
Put (co.cask.cdap.api.dataset.table.Put): 12
DatasetProperties (co.cask.cdap.api.dataset.DatasetProperties): 11
Get (co.cask.cdap.api.dataset.table.Get): 10
TransactionSystemClient (org.apache.tephra.TransactionSystemClient): 10
Row (co.cask.cdap.api.dataset.table.Row): 8
ConsumerConfig (co.cask.cdap.data2.queue.ConsumerConfig): 8
KeyStructValueTableDefinition (co.cask.cdap.explore.service.datasets.KeyStructValueTableDefinition): 8
Scan (co.cask.cdap.api.dataset.table.Scan): 7
ArrayList (java.util.ArrayList): 7
CConfiguration (co.cask.cdap.common.conf.CConfiguration): 6
ExploreExecutionResult (co.cask.cdap.explore.client.ExploreExecutionResult): 6
DatasetId (co.cask.cdap.proto.id.DatasetId): 6
IOException (java.io.IOException): 6
BufferingTableTest (co.cask.cdap.data2.dataset2.lib.table.BufferingTableTest): 5