Example 1 with Transaction

Use of org.apache.hive.hcatalog.streaming.mutate.client.Transaction in project hive by apache.

From the class TestMutations, method testTransactionBatchAbort:

@Test
public void testTransactionBatchAbort() throws Exception {
    // Create a partitioned test table and connect a client that will manage the transaction
    Table table = partitionedTableBuilder.addPartition(ASIA_INDIA).create(metaStoreClient);
    MutatorClient client = new MutatorClientBuilder().addSinkTable(table.getDbName(), table.getTableName(), true).metaStoreUri(metaStoreUri).build();
    client.connect();
    Transaction transaction = client.newTransaction();
    List<AcidTable> destinations = client.getTables();
    transaction.begin();
    // Build a coordinator for the destination table and write two records within the open transaction
    MutatorFactory mutatorFactory = new ReflectiveMutatorFactory(conf, MutableRecord.class, RECORD_ID_COLUMN, BUCKET_COLUMN_INDEXES);
    MutatorCoordinator coordinator = new MutatorCoordinatorBuilder().metaStoreUri(metaStoreUri).table(destinations.get(0)).mutatorFactory(mutatorFactory).build();
    BucketIdResolver bucketIdResolver = mutatorFactory.newBucketIdResolver(destinations.get(0).getTotalBuckets());
    MutableRecord record1 = (MutableRecord) bucketIdResolver.attachBucketIdToRecord(new MutableRecord(1, "Hello streaming"));
    MutableRecord record2 = (MutableRecord) bucketIdResolver.attachBucketIdToRecord(new MutableRecord(2, "Welcome to streaming"));
    coordinator.insert(ASIA_INDIA, record1);
    coordinator.insert(ASIA_INDIA, record2);
    coordinator.close();
    // Abort instead of committing: none of the inserted data should become visible
    transaction.abort();
    assertThat(transaction.getState(), is(ABORTED));
    client.close();
    StreamingAssert streamingAssertions = assertionFactory.newStreamingAssert(table, ASIA_INDIA);
    streamingAssertions.assertNothingWritten();
}
Also used: AcidTable (org.apache.hive.hcatalog.streaming.mutate.client.AcidTable), Table (org.apache.hadoop.hive.metastore.api.Table), MutatorCoordinator (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorCoordinator), MutatorFactory (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorFactory), Transaction (org.apache.hive.hcatalog.streaming.mutate.client.Transaction), MutatorCoordinatorBuilder (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorCoordinatorBuilder), BucketIdResolver (org.apache.hive.hcatalog.streaming.mutate.worker.BucketIdResolver), MutatorClient (org.apache.hive.hcatalog.streaming.mutate.client.MutatorClient), MutatorClientBuilder (org.apache.hive.hcatalog.streaming.mutate.client.MutatorClientBuilder), Test (org.junit.Test)
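
The abort path shown above is exercised directly in the test; in application code the same calls are typically arranged so that any failure during the write phase aborts the transaction. A minimal sketch of that pattern, using only calls that appear in these examples, might look as follows. The writeRecords helper is a hypothetical placeholder for the coordinator insert/update/delete calls, and databaseName, tableName, metaStoreUri and mutatorFactory are assumed to be in scope as in Example 4 below.

MutatorClient client = new MutatorClientBuilder().addSinkTable(databaseName, tableName, true).metaStoreUri(metaStoreUri).build();
client.connect();
Transaction transaction = client.newTransaction();
List<AcidTable> destinations = client.getTables();
transaction.begin();
try {
    // Worker-side writes happen while the transaction is open
    MutatorCoordinator coordinator = new MutatorCoordinatorBuilder().metaStoreUri(metaStoreUri).table(destinations.get(0)).mutatorFactory(mutatorFactory).build();
    // Hypothetical helper standing in for the insert/update/delete calls
    writeRecords(coordinator);
    coordinator.close();
    transaction.commit();
} catch (Exception e) {
    // On any failure, roll back; as the test asserts, nothing becomes visible after abort()
    transaction.abort();
    throw e;
} finally {
    client.close();
}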

Example 2 with Transaction

Use of org.apache.hive.hcatalog.streaming.mutate.client.Transaction in project hive by apache.

From the class TestMutations, method testTransactionBatchEmptyCommitPartitioned:

@Test
public void testTransactionBatchEmptyCommitPartitioned() throws Exception {
    Table table = partitionedTableBuilder.addPartition(ASIA_INDIA).create(metaStoreClient);
    MutatorClient client = new MutatorClientBuilder().addSinkTable(table.getDbName(), table.getTableName(), true).metaStoreUri(metaStoreUri).build();
    client.connect();
    Transaction transaction = client.newTransaction();
    transaction.begin();
    transaction.commit();
    assertThat(transaction.getState(), is(COMMITTED));
    client.close();
}
Also used: AcidTable (org.apache.hive.hcatalog.streaming.mutate.client.AcidTable), Table (org.apache.hadoop.hive.metastore.api.Table), Transaction (org.apache.hive.hcatalog.streaming.mutate.client.Transaction), MutatorClient (org.apache.hive.hcatalog.streaming.mutate.client.MutatorClient), MutatorClientBuilder (org.apache.hive.hcatalog.streaming.mutate.client.MutatorClientBuilder), Test (org.junit.Test)

Example 3 with Transaction

Use of org.apache.hive.hcatalog.streaming.mutate.client.Transaction in project hive by apache.

From the class TestMutations, method testTransactionBatchEmptyAbortPartitioned:

@Test
public void testTransactionBatchEmptyAbortPartitioned() throws Exception {
    Table table = partitionedTableBuilder.addPartition(ASIA_INDIA).create(metaStoreClient);
    MutatorClient client = new MutatorClientBuilder().addSinkTable(table.getDbName(), table.getTableName(), true).metaStoreUri(metaStoreUri).build();
    client.connect();
    Transaction transaction = client.newTransaction();
    List<AcidTable> destinations = client.getTables();
    transaction.begin();
    MutatorFactory mutatorFactory = new ReflectiveMutatorFactory(conf, MutableRecord.class, RECORD_ID_COLUMN, BUCKET_COLUMN_INDEXES);
    MutatorCoordinator coordinator = new MutatorCoordinatorBuilder().metaStoreUri(metaStoreUri).table(destinations.get(0)).mutatorFactory(mutatorFactory).build();
    coordinator.close();
    transaction.abort();
    assertThat(transaction.getState(), is(ABORTED));
    client.close();
}
Also used: AcidTable (org.apache.hive.hcatalog.streaming.mutate.client.AcidTable), Table (org.apache.hadoop.hive.metastore.api.Table), MutatorFactory (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorFactory), Transaction (org.apache.hive.hcatalog.streaming.mutate.client.Transaction), MutatorCoordinatorBuilder (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorCoordinatorBuilder), MutatorCoordinator (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorCoordinator), MutatorClient (org.apache.hive.hcatalog.streaming.mutate.client.MutatorClient), MutatorClientBuilder (org.apache.hive.hcatalog.streaming.mutate.client.MutatorClientBuilder), Test (org.junit.Test)

Example 4 with Transaction

Use of org.apache.hive.hcatalog.streaming.mutate.client.Transaction in project hive by apache.

From the class ExampleUseCase, method example:

/* This is an illustration, not a functioning example. */
public void example() throws Exception {
    // CLIENT/TOOL END
    // 
    // Singleton instance in the job client
    // Create a client to manage our transaction
    MutatorClient client = new MutatorClientBuilder().addSinkTable(databaseName, tableName, createPartitions).metaStoreUri(metaStoreUri).build();
    // Get the transaction
    Transaction transaction = client.newTransaction();
    // Get serializable details of the destination tables
    List<AcidTable> tables = client.getTables();
    transaction.begin();
    // CLUSTER / WORKER END
    // 
    // Job submitted to the cluster
    // 
    BucketIdResolver bucketIdResolver = mutatorFactory.newBucketIdResolver(tables.get(0).getTotalBuckets());
    record1 = bucketIdResolver.attachBucketIdToRecord(record1);
    // --------------------------------------------------------------
    // DATA SHOULD GET SORTED BY YOUR ETL/MERGE PROCESS HERE
    // 
    // Group the data by (partitionValues, ROW__ID.bucketId)
    // Order the groups by (ROW__ID.writeId, ROW__ID.rowId)
    // --------------------------------------------------------------
    // One of these runs at the output of each reducer
    // 
    MutatorCoordinator coordinator = new MutatorCoordinatorBuilder().metaStoreUri(metaStoreUri).table(tables.get(0)).mutatorFactory(mutatorFactory).build();
    coordinator.insert(partitionValues1, record1);
    coordinator.update(partitionValues2, record2);
    coordinator.delete(partitionValues3, record3);
    coordinator.close();
    // CLIENT/TOOL END
    // 
    // The tasks have completed, control is back at the tool
    transaction.commit();
    client.close();
}
Also used: Transaction (org.apache.hive.hcatalog.streaming.mutate.client.Transaction), AcidTable (org.apache.hive.hcatalog.streaming.mutate.client.AcidTable), MutatorCoordinatorBuilder (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorCoordinatorBuilder), MutatorCoordinator (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorCoordinator), BucketIdResolver (org.apache.hive.hcatalog.streaming.mutate.worker.BucketIdResolver), MutatorClient (org.apache.hive.hcatalog.streaming.mutate.client.MutatorClient), MutatorClientBuilder (org.apache.hive.hcatalog.streaming.mutate.client.MutatorClientBuilder)
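
Note that this illustration omits client.connect(), which the working tests on this page call before newTransaction(). The tests also construct their MutatorFactory reflectively from a record class plus a record-id column index and bucketed column indexes. The sketch below shows what such a record class could look like; the field layout, column indexes and use of org.apache.hadoop.io.Text are illustrative assumptions rather than the exact test fixtures.

import org.apache.hadoop.hive.ql.io.RecordIdentifier;
import org.apache.hadoop.io.Text;

// Hypothetical mutable record: column 0 carries the id used for bucketing, column 1 the payload,
// and column 2 holds the ROW__ID that update and delete operations require.
public class MutableRecord {

    // column 0
    public final int id;
    // column 1
    public final Text msg;
    // column 2
    public RecordIdentifier rowId;

    public MutableRecord(int id, String msg) {
        this.id = id;
        this.msg = new Text(msg);
    }
}

With a record shaped like this, the factory used in the tests would be created along the lines of new ReflectiveMutatorFactory(conf, MutableRecord.class, 2, new int[] { 0 }), where 2 and { 0 } stand in for the assumed RECORD_ID_COLUMN and BUCKET_COLUMN_INDEXES constants.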

Example 5 with Transaction

Use of org.apache.hive.hcatalog.streaming.mutate.client.Transaction in project hive by apache.

From the class TestMutations, method testMulti:

@Test
public void testMulti() throws Exception {
    Table table = partitionedTableBuilder.addPartition(ASIA_INDIA).create(metaStoreClient);
    MutatorClient client = new MutatorClientBuilder().addSinkTable(table.getDbName(), table.getTableName(), true).metaStoreUri(metaStoreUri).build();
    client.connect();
    Transaction transaction = client.newTransaction();
    List<AcidTable> destinations = client.getTables();
    transaction.begin();
    MutatorFactory mutatorFactory = new ReflectiveMutatorFactory(conf, MutableRecord.class, RECORD_ID_COLUMN, BUCKET_COLUMN_INDEXES);
    MutatorCoordinator coordinator = new MutatorCoordinatorBuilder().metaStoreUri(metaStoreUri).table(destinations.get(0)).mutatorFactory(mutatorFactory).build();
    BucketIdResolver bucketIdResolver = mutatorFactory.newBucketIdResolver(destinations.get(0).getTotalBuckets());
    MutableRecord asiaIndiaRecord1 = (MutableRecord) bucketIdResolver.attachBucketIdToRecord(new MutableRecord(1, "Hello streaming"));
    MutableRecord europeUkRecord1 = (MutableRecord) bucketIdResolver.attachBucketIdToRecord(new MutableRecord(2, "Hello streaming"));
    MutableRecord europeFranceRecord1 = (MutableRecord) bucketIdResolver.attachBucketIdToRecord(new MutableRecord(3, "Hello streaming"));
    MutableRecord europeFranceRecord2 = (MutableRecord) bucketIdResolver.attachBucketIdToRecord(new MutableRecord(4, "Bonjour streaming"));
    coordinator.insert(ASIA_INDIA, asiaIndiaRecord1);
    coordinator.insert(EUROPE_UK, europeUkRecord1);
    coordinator.insert(EUROPE_FRANCE, europeFranceRecord1);
    coordinator.insert(EUROPE_FRANCE, europeFranceRecord2);
    coordinator.close();
    transaction.commit();
    // ASIA_INDIA
    StreamingAssert streamingAssertions = assertionFactory.newStreamingAssert(table, ASIA_INDIA);
    streamingAssertions.assertMinWriteId(1L);
    streamingAssertions.assertMaxWriteId(1L);
    streamingAssertions.assertExpectedFileCount(1);
    List<Record> readRecords = streamingAssertions.readRecords();
    assertThat(readRecords.size(), is(1));
    assertThat(readRecords.get(0).getRow(), is("{1, Hello streaming}"));
    assertThat(readRecords.get(0).getRecordIdentifier(), is(new RecordIdentifier(1L, encodeBucket(0), 0L)));
    // EUROPE_UK
    streamingAssertions = assertionFactory.newStreamingAssert(table, EUROPE_UK);
    streamingAssertions.assertMinWriteId(1L);
    streamingAssertions.assertMaxWriteId(1L);
    streamingAssertions.assertExpectedFileCount(1);
    readRecords = streamingAssertions.readRecords();
    assertThat(readRecords.size(), is(1));
    assertThat(readRecords.get(0).getRow(), is("{2, Hello streaming}"));
    assertThat(readRecords.get(0).getRecordIdentifier(), is(new RecordIdentifier(1L, encodeBucket(0), 0L)));
    // EUROPE_FRANCE
    streamingAssertions = assertionFactory.newStreamingAssert(table, EUROPE_FRANCE);
    streamingAssertions.assertMinWriteId(1L);
    streamingAssertions.assertMaxWriteId(1L);
    streamingAssertions.assertExpectedFileCount(1);
    readRecords = streamingAssertions.readRecords();
    assertThat(readRecords.size(), is(2));
    assertThat(readRecords.get(0).getRow(), is("{3, Hello streaming}"));
    assertThat(readRecords.get(0).getRecordIdentifier(), is(new RecordIdentifier(1L, encodeBucket(0), 0L)));
    assertThat(readRecords.get(1).getRow(), is("{4, Bonjour streaming}"));
    assertThat(readRecords.get(1).getRecordIdentifier(), is(new RecordIdentifier(1L, encodeBucket(0), 1L)));
    client.close();
}
Also used: AcidTable (org.apache.hive.hcatalog.streaming.mutate.client.AcidTable), Table (org.apache.hadoop.hive.metastore.api.Table), MutatorCoordinator (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorCoordinator), RecordIdentifier (org.apache.hadoop.hive.ql.io.RecordIdentifier), MutatorFactory (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorFactory), Transaction (org.apache.hive.hcatalog.streaming.mutate.client.Transaction), MutatorCoordinatorBuilder (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorCoordinatorBuilder), BucketIdResolver (org.apache.hive.hcatalog.streaming.mutate.worker.BucketIdResolver), Record (org.apache.hive.hcatalog.streaming.mutate.StreamingAssert.Record), MutatorClient (org.apache.hive.hcatalog.streaming.mutate.client.MutatorClient), MutatorClientBuilder (org.apache.hive.hcatalog.streaming.mutate.client.MutatorClientBuilder), Test (org.junit.Test)
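
The partition constants ASIA_INDIA, EUROPE_UK and EUROPE_FRANCE used throughout these tests are ordered lists of partition values, one entry per partition column, passed unchanged to the coordinator's insert, update and delete calls. Assuming the table is partitioned by a continent column and a country column, they could be defined along these lines (the column names and exact values are assumptions for illustration):

// Hypothetical partition-value constants matching a (continent, country) partition scheme
private static final List<String> ASIA_INDIA = Arrays.asList("Asia", "India");
private static final List<String> EUROPE_UK = Arrays.asList("Europe", "UK");
private static final List<String> EUROPE_FRANCE = Arrays.asList("Europe", "France");

Because the tests pass true as the third argument of addSinkTable (named createPartitions in Example 4), missing partitions are presumably created on demand, which would explain why testMulti can write to EUROPE_UK and EUROPE_FRANCE even though only ASIA_INDIA was added when the table was created.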

Aggregations

AcidTable (org.apache.hive.hcatalog.streaming.mutate.client.AcidTable): 10 usages
MutatorClient (org.apache.hive.hcatalog.streaming.mutate.client.MutatorClient): 10 usages
MutatorClientBuilder (org.apache.hive.hcatalog.streaming.mutate.client.MutatorClientBuilder): 10 usages
Transaction (org.apache.hive.hcatalog.streaming.mutate.client.Transaction): 10 usages
Table (org.apache.hadoop.hive.metastore.api.Table): 9 usages
Test (org.junit.Test): 9 usages
MutatorCoordinator (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorCoordinator): 8 usages
MutatorCoordinatorBuilder (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorCoordinatorBuilder): 8 usages
MutatorFactory (org.apache.hive.hcatalog.streaming.mutate.worker.MutatorFactory): 7 usages
BucketIdResolver (org.apache.hive.hcatalog.streaming.mutate.worker.BucketIdResolver): 6 usages
RecordIdentifier (org.apache.hadoop.hive.ql.io.RecordIdentifier): 4 usages
Record (org.apache.hive.hcatalog.streaming.mutate.StreamingAssert.Record): 4 usages