Example 1 with InMemoryTxSystemClient

Use of org.apache.tephra.inmemory.InMemoryTxSystemClient in project cdap by caskdata.

The class PartitionedFileSetTest, method before:

@Before
public void before() throws Exception {
    txClient = new InMemoryTxSystemClient(dsFrameworkUtil.getTxManager());
    dsFrameworkUtil.createInstance("partitionedFileSet", pfsInstance,
        PartitionedFileSetProperties.builder()
            .setPartitioning(PARTITIONING_1)
            .setTablePermissions(tablePermissions)
            .setBasePath("testDir")
            .setFilePermissions(fsPermissions)
            .setFileGroup(group)
            .build());
    pfsBaseLocation = ((PartitionedFileSet) dsFrameworkUtil.getInstance(pfsInstance)).getEmbeddedFileSet().getBaseLocation();
    Assert.assertTrue(pfsBaseLocation.exists());
}
Also used: PartitionedFileSet (co.cask.cdap.api.dataset.lib.PartitionedFileSet), InMemoryTxSystemClient (org.apache.tephra.inmemory.InMemoryTxSystemClient), Before (org.junit.Before)

Example 2 with InMemoryTxSystemClient

Use of org.apache.tephra.inmemory.InMemoryTxSystemClient in project cdap by caskdata.

The class TableTest, method before:

@Before
public void before() {
    Configuration txConf = HBaseConfiguration.create();
    TransactionManager txManager = new TransactionManager(txConf);
    txManager.startAndWait();
    txClient = new InMemoryTxSystemClient(txManager);
}
Also used: Configuration (org.apache.hadoop.conf.Configuration), HBaseConfiguration (org.apache.hadoop.hbase.HBaseConfiguration), TransactionManager (org.apache.tephra.TransactionManager), InMemoryTxSystemClient (org.apache.tephra.inmemory.InMemoryTxSystemClient), Before (org.junit.Before)
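The setup above only constructs the client; the later examples then drive the transaction lifecycle by hand (startTx, buffered writes, commitTx or rollbackTx). As a stand-alone sketch of that write-buffering pattern, here is a toy model with no Tephra dependency; all names (BufferedTable and its methods) are invented for illustration and only mimic the shape of the real TransactionAware contract:

```java
import java.util.HashMap;
import java.util.Map;

// Toy "transaction-aware" table: writes go to a per-transaction buffer and
// become visible to readers only after commitTx. Invented names, not Tephra.
class BufferedTable {
    private final Map<String, String> committed = new HashMap<>();
    private Map<String, String> buffer = new HashMap<>();

    void startTx() { buffer = new HashMap<>(); }                    // begin: fresh write buffer
    void put(String key, String value) { buffer.put(key, value); }  // writes land in the buffer
    void commitTx() { committed.putAll(buffer); }                   // flush the buffer on commit
    void rollbackTx() { buffer.clear(); }                           // discard buffered writes
    String get(String key) { return committed.get(key); }           // reads see committed data only
}

public class TxLifecycleSketch {
    public static void main(String[] args) {
        BufferedTable table = new BufferedTable();

        // Committed transaction: the write becomes visible.
        table.startTx();
        table.put("key1", "val1");
        table.commitTx();
        System.out.println(table.get("key1"));  // prints val1

        // Rolled-back transaction: the buffered write is discarded.
        table.startTx();
        table.put("key2", "val2");
        table.rollbackTx();
        System.out.println(table.get("key2"));  // prints null
    }
}
```

In the real examples this buffering lives inside the dataset, and the InMemoryTxSystemClient decides between commit and abort.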

Example 3 with InMemoryTxSystemClient

Use of org.apache.tephra.inmemory.InMemoryTxSystemClient in project cdap by caskdata.

The class DatasetOpExecutorServiceTest, method testRest:

@Test
public void testRest() throws Exception {
    // check non-existence with 404
    testAdminOp(bob, "exists", 404, null);
    // add the instance, which automatically creates the dataset
    dsFramework.addInstance("table", bob, DatasetProperties.EMPTY);
    testAdminOp(bob, "exists", 200, true);
    testAdminOp("bob", "exists", 404, null);
    // check truncate
    final Table table = dsFramework.getDataset(bob, DatasetDefinition.NO_ARGUMENTS, null);
    Assert.assertNotNull(table);
    TransactionExecutor txExecutor = new DefaultTransactionExecutor(
        new InMemoryTxSystemClient(txManager), ImmutableList.of((TransactionAware) table));
    // write something to the table
    txExecutor.execute(new TransactionExecutor.Subroutine() {

        @Override
        public void apply() throws Exception {
            table.put(new Put("key1", "col1", "val1"));
        }
    });
    // verify that we can read the data
    txExecutor.execute(new TransactionExecutor.Subroutine() {

        @Override
        public void apply() throws Exception {
            Assert.assertEquals("val1", table.get(new Get("key1", "col1")).getString("col1"));
        }
    });
    testAdminOp(bob, "truncate", 200, null);
    // verify that data is no longer there
    txExecutor.execute(new TransactionExecutor.Subroutine() {

        @Override
        public void apply() throws Exception {
            Assert.assertTrue(table.get(new Get("key1", "col1")).isEmpty());
        }
    });
    // check upgrade
    testAdminOp(bob, "upgrade", 200, null);
    // drop and check non-existence
    dsFramework.deleteInstance(bob);
    testAdminOp(bob, "exists", 404, null);
}
Also used: Table (co.cask.cdap.api.dataset.table.Table), TransactionAware (org.apache.tephra.TransactionAware), Get (co.cask.cdap.api.dataset.table.Get), TransactionExecutor (org.apache.tephra.TransactionExecutor), DefaultTransactionExecutor (org.apache.tephra.DefaultTransactionExecutor), InMemoryTxSystemClient (org.apache.tephra.inmemory.InMemoryTxSystemClient), URISyntaxException (java.net.URISyntaxException), MalformedURLException (java.net.MalformedURLException), IOException (java.io.IOException), Put (co.cask.cdap.api.dataset.table.Put), Test (org.junit.Test)

Example 4 with InMemoryTxSystemClient

Use of org.apache.tephra.inmemory.InMemoryTxSystemClient in project cdap by caskdata.

The class CubeDatasetTest, method testTxRetryOnFailure:

@Test
public void testTxRetryOnFailure() throws Exception {
    // This test ensures that there's no non-transactional cache used in cube dataset. For that, it
    // 1) simulates transaction conflict for the first write to cube
    // 2) attempts to write again, writes successfully
    // 3) uses second cube instance to read the result
    //
    // If a non-transactional cache were used in the cube, it would fill the entity mappings in the
    // first tx and then reuse them when writing data. When reading, there would then be no mapping
    // in the entity table to decode, since the first tx that wrote it is not visible (it was
    // aborted on conflict).
    Aggregation agg1 = new DefaultAggregation(ImmutableList.of("dim1", "dim2", "dim3"));
    int resolution = 1;
    Cube cube1 = getCubeInternal("concurrCube", new int[] { resolution }, ImmutableMap.of("agg1", agg1));
    Cube cube2 = getCubeInternal("concurrCube", new int[] { resolution }, ImmutableMap.of("agg1", agg1));
    Configuration txConf = HBaseConfiguration.create();
    TransactionManager txManager = new TransactionManager(txConf);
    txManager.startAndWait();
    try {
        TransactionSystemClient txClient = new InMemoryTxSystemClient(txManager);
        // 1) write, then abort after commit to simulate a conflict
        Transaction tx = txClient.startShort();
        ((TransactionAware) cube1).startTx(tx);
        writeInc(cube1, "metric1", 1, 1, "1", "1", "1");
        ((TransactionAware) cube1).commitTx();
        txClient.abort(tx);
        ((TransactionAware) cube1).rollbackTx();
        // 2) write successfully
        tx = txClient.startShort();
        ((TransactionAware) cube1).startTx(tx);
        writeInc(cube1, "metric1", 1, 1, "1", "1", "1");
        // this time commit for real
        ((TransactionAware) cube1).commitTx();
        txClient.commit(tx);
        ((TransactionAware) cube1).postTxCommit();
        // 3) read using different cube instance
        tx = txClient.startShort();
        ((TransactionAware) cube2).startTx(tx);
        verifyCountQuery(cube2, 0, 2, resolution, "metric1", AggregationFunction.SUM,
                         new HashMap<String, String>(), new ArrayList<String>(),
                         ImmutableList.of(new TimeSeries("metric1", new HashMap<String, String>(),
                                                         timeValues(1, 1))));
        // commit the read transaction as well
        ((TransactionAware) cube2).commitTx();
        txClient.commit(tx);
        ((TransactionAware) cube2).postTxCommit();
    } finally {
        txManager.stopAndWait();
    }
}
Also used: TimeSeries (co.cask.cdap.api.dataset.lib.cube.TimeSeries), Configuration (org.apache.hadoop.conf.Configuration), HBaseConfiguration (org.apache.hadoop.hbase.HBaseConfiguration), InMemoryTxSystemClient (org.apache.tephra.inmemory.InMemoryTxSystemClient), TransactionSystemClient (org.apache.tephra.TransactionSystemClient), Transaction (org.apache.tephra.Transaction), Cube (co.cask.cdap.api.dataset.lib.cube.Cube), TransactionManager (org.apache.tephra.TransactionManager), TransactionAware (org.apache.tephra.TransactionAware), Test (org.junit.Test)
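The comment at the top of testTxRetryOnFailure explains the failure mode the test guards against: an entity mapping remembered outside the transaction survives a rollback, so a later write reuses a code whose mapping was never committed. As a toy stand-alone illustration of that bug, here is a sketch with no CDAP or Tephra dependency; EntityCodec and its methods are invented names that only mimic the shape of the entity-mapping logic, not the real Cube code:

```java
import java.util.HashMap;
import java.util.Map;

// Toy codec with a deliberately broken design: the name->code cache is NOT
// transactional, while the code->name "entity table" is. Invented names.
class EntityCodec {
    private final Map<Integer, String> committed = new HashMap<>(); // committed entity table
    private Map<Integer, String> buffer = new HashMap<>();          // this tx's writes
    private final Map<String, Integer> cache = new HashMap<>();     // non-transactional cache
    private int nextCode = 1;

    void startTx() { buffer = new HashMap<>(); }
    void commitTx() { committed.putAll(buffer); }
    void rollbackTx() { buffer.clear(); }  // note: the cache is NOT rolled back

    int encode(String name) {
        Integer code = cache.get(name);
        if (code == null) {
            code = nextCode++;
            buffer.put(code, name);  // new mapping written in this tx...
            cache.put(name, code);   // ...and remembered outside the tx
        }
        return code;
    }

    String decode(int code) { return committed.get(code); }  // readers see committed mappings only
}

public class StaleCacheSketch {
    public static void main(String[] args) {
        EntityCodec codec = new EntityCodec();

        // Tx 1: encode "dim1", then the transaction is rolled back (conflict).
        codec.startTx();
        int code = codec.encode("dim1");
        codec.rollbackTx();

        // Tx 2: the cache still holds the mapping, so encode() returns the same
        // code WITHOUT re-writing the mapping, and commit persists nothing.
        codec.startTx();
        int codeAgain = codec.encode("dim1");
        codec.commitTx();

        // A reader now holds a code it cannot decode: the mapping was only ever
        // written by the aborted transaction.
        System.out.println(codeAgain == code);        // prints true
        System.out.println(codec.decode(codeAgain));  // prints null
    }
}
```

The CDAP test avoids this by keeping all entity mappings transactional, which is exactly what retrying the write in a fresh transaction verifies.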

Example 5 with InMemoryTxSystemClient

Use of org.apache.tephra.inmemory.InMemoryTxSystemClient in project cdap by caskdata.

The class DynamicDatasetCacheTest, method init:

@BeforeClass
public static void init() throws DatasetManagementException, IOException {
    dsFramework = dsFrameworkUtil.getFramework();
    dsFramework.addModule(NAMESPACE.datasetModule("testDataset"), new TestDatasetModule());
    dsFramework.addModule(NAMESPACE2.datasetModule("testDataset"), new TestDatasetModule());
    txClient = new InMemoryTxSystemClient(dsFrameworkUtil.getTxManager());
    dsFrameworkUtil.createInstance("testDataset", NAMESPACE.dataset("a"), DatasetProperties.EMPTY);
    dsFrameworkUtil.createInstance("testDataset", NAMESPACE.dataset("b"), DatasetProperties.EMPTY);
    dsFrameworkUtil.createInstance("testDataset", NAMESPACE.dataset("c"), DatasetProperties.EMPTY);
    dsFrameworkUtil.createInstance("testDataset", NAMESPACE2.dataset("a2"), DatasetProperties.EMPTY);
    dsFrameworkUtil.createInstance("testDataset", NAMESPACE2.dataset("c2"), DatasetProperties.EMPTY);
}
Also used: InMemoryTxSystemClient (org.apache.tephra.inmemory.InMemoryTxSystemClient), BeforeClass (org.junit.BeforeClass)

Aggregations

InMemoryTxSystemClient (org.apache.tephra.inmemory.InMemoryTxSystemClient): 10 uses
TransactionManager (org.apache.tephra.TransactionManager): 6 uses
Configuration (org.apache.hadoop.conf.Configuration): 5 uses
Test (org.junit.Test): 5 uses
PartitionedFileSet (co.cask.cdap.api.dataset.lib.PartitionedFileSet): 2 uses
Get (co.cask.cdap.api.dataset.table.Get): 2 uses
Put (co.cask.cdap.api.dataset.table.Put): 2 uses
Table (co.cask.cdap.api.dataset.table.Table): 2 uses
HBaseConfiguration (org.apache.hadoop.hbase.HBaseConfiguration): 2 uses
TransactionAware (org.apache.tephra.TransactionAware): 2 uses
Before (org.junit.Before): 2 uses
PartitionKey (co.cask.cdap.api.dataset.lib.PartitionKey): 1 use
Cube (co.cask.cdap.api.dataset.lib.cube.Cube): 1 use
TimeSeries (co.cask.cdap.api.dataset.lib.cube.TimeSeries): 1 use
ConcurrentPartitionConsumer (co.cask.cdap.api.dataset.lib.partitioned.ConcurrentPartitionConsumer): 1 use
PartitionConsumer (co.cask.cdap.api.dataset.lib.partitioned.PartitionConsumer): 1 use
DatasetDefinitionRegistry (co.cask.cdap.api.dataset.module.DatasetDefinitionRegistry): 1 use
DatasetModule (co.cask.cdap.api.dataset.module.DatasetModule): 1 use
MetricsCollectionService (co.cask.cdap.api.metrics.MetricsCollectionService): 1 use
EndpointStrategy (co.cask.cdap.common.discovery.EndpointStrategy): 1 use