Use of org.apache.storm.hive.bolt.mapper.JsonRecordHiveMapper in project storm by apache.
From class TestHiveBolt, method testNoTickEmptyBatches:
@Test
public void testNoTickEmptyBatches() throws Exception {
    JsonRecordHiveMapper mapper = new JsonRecordHiveMapper()
            .withColumnFields(new Fields(colNames1))
            .withPartitionFields(new Fields(partNames));
    HiveOptions hiveOptions = new HiveOptions(metaStoreURI, dbName, tblName, mapper)
            .withTxnsPerBatch(2)
            .withBatchSize(2);
    bolt = new HiveBolt(hiveOptions);
    bolt.prepare(config, null, new OutputCollector(collector));

    // The batch is empty, so the tick must not trigger a flush or any acks
    // through the collector.
    Tuple mockTick = MockTupleHelpers.mockTickTuple();
    bolt.execute(mockTick);
    verifyZeroInteractions(collector);

    bolt.cleanup();
}
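The tick tuple above comes from a tiny Mockito helper. A minimal sketch of what MockTupleHelpers can look like, assuming a tick tuple is identified by Storm's system component and tick stream IDs (the exact helper body is an assumption, not quoted from the project):

import org.apache.storm.Constants;
import org.apache.storm.tuple.Tuple;
import org.mockito.Mockito;

public final class MockTupleHelpers {

    private MockTupleHelpers() {
    }

    // A tick tuple is an ordinary tuple whose source component and stream
    // are Storm's system component and system tick stream.
    public static Tuple mockTickTuple() {
        return mockTuple(Constants.SYSTEM_COMPONENT_ID, Constants.SYSTEM_TICK_STREAM_ID);
    }

    public static Tuple mockTuple(String componentId, String streamId) {
        Tuple tuple = Mockito.mock(Tuple.class);
        Mockito.when(tuple.getSourceComponent()).thenReturn(componentId);
        Mockito.when(tuple.getSourceStreamId()).thenReturn(streamId);
        return tuple;
    }
}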
Use of org.apache.storm.hive.bolt.mapper.JsonRecordHiveMapper in project storm by apache.
From class TestHiveBolt, method testNoAcksUntilFlushed:
@Test
public void testNoAcksUntilFlushed() {
    JsonRecordHiveMapper mapper = new JsonRecordHiveMapper()
            .withColumnFields(new Fields(colNames1))
            .withPartitionFields(new Fields(partNames));
    HiveOptions hiveOptions = new HiveOptions(metaStoreURI, dbName, tblName, mapper)
            .withTxnsPerBatch(2)
            .withBatchSize(2);
    bolt = new HiveBolt(hiveOptions);
    bolt.prepare(config, null, new OutputCollector(collector));

    Tuple tuple1 = generateTestTuple(1, "SJC", "Sunnyvale", "CA");
    Tuple tuple2 = generateTestTuple(2, "SFO", "San Jose", "CA");

    // The first tuple only half-fills the batch (batch size is 2),
    // so nothing is flushed or acked yet.
    bolt.execute(tuple1);
    verifyZeroInteractions(collector);

    // The second tuple completes the batch, forcing a flush that acks both.
    bolt.execute(tuple2);
    verify(collector).ack(tuple1);
    verify(collector).ack(tuple2);

    bolt.cleanup();
}
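These tests exercise real tuples rather than mocks. A plausible sketch of the generateTestTuple helper (a member of the test class) used throughout, assuming the field names (id, msg, city, state) implied by the assertions; the context wiring and the TupleImpl constructor arity vary across Storm versions, so treat the details as an assumption:

import java.util.HashMap;

import org.apache.storm.Config;
import org.apache.storm.task.GeneralTopologyContext;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.TupleImpl;
import org.apache.storm.tuple.Values;

private Tuple generateTestTuple(Object id, Object msg, Object city, Object state) {
    TopologyBuilder builder = new TopologyBuilder();
    // A bare-bones topology context whose only job is to report the
    // output fields that the bolt's mapper will look up by name.
    GeneralTopologyContext topologyContext = new GeneralTopologyContext(
            builder.createTopology(), new Config(), new HashMap<>(),
            new HashMap<>(), new HashMap<>(), "") {
        @Override
        public Fields getComponentOutputFields(String componentId, String streamId) {
            return new Fields("id", "msg", "city", "state");
        }
    };
    return new TupleImpl(topologyContext, new Values(id, msg, city, state), 1, "");
}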
Use of org.apache.storm.hive.bolt.mapper.JsonRecordHiveMapper in project storm by apache.
From class TestHiveBolt, method testJsonWriter:
@Test
public void testJsonWriter() throws Exception {
    // A JSON record does not need its columns in the same order
    // as the Hive table, since fields are matched by name.
    JsonRecordHiveMapper mapper = new JsonRecordHiveMapper()
            .withColumnFields(new Fields(colNames1))
            .withPartitionFields(new Fields(partNames));
    HiveOptions hiveOptions = new HiveOptions(metaStoreURI, dbName, tblName, mapper)
            .withTxnsPerBatch(2)
            .withBatchSize(1);
    bolt = new HiveBolt(hiveOptions);
    bolt.prepare(config, null, new OutputCollector(collector));

    // Batch size is 1, so a single tuple triggers an immediate flush and ack.
    Tuple tuple1 = generateTestTuple(1, "SJC", "Sunnyvale", "CA");
    bolt.execute(tuple1);
    verify(collector).ack(tuple1);

    checkDataWritten(tblName, dbName, "1,SJC,Sunnyvale,CA");
    bolt.cleanup();
}
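To make the order comment concrete: JsonRecordHiveMapper serializes each tuple into a JSON object keyed by field name, so columns are resolved by name rather than position. A hedged usage sketch; the field names and the exact JSON shape are illustrative assumptions, not fixtures from this test:

// Assumption: the tuple carries fields ("id", "msg", "city", "state"),
// as produced by generateTestTuple(...) above.
JsonRecordHiveMapper mapper = new JsonRecordHiveMapper()
        .withColumnFields(new Fields("msg", "id"))          // order differs from the table
        .withPartitionFields(new Fields("city", "state"));

byte[] record = mapper.mapRecord(tuple1);
// record holds JSON bytes along the lines of {"msg":"SJC","id":1};
// the writer resolves each key against the table's column names.
List<String> partitionVals = mapper.mapPartitions(tuple1);  // ["Sunnyvale", "CA"]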
Use of org.apache.storm.hive.bolt.mapper.JsonRecordHiveMapper in project storm by apache.
From class TestHiveBolt, method testTickTuple:
@Test
public void testTickTuple() {
    JsonRecordHiveMapper mapper = new JsonRecordHiveMapper()
            .withColumnFields(new Fields(colNames1))
            .withPartitionFields(new Fields(partNames));
    HiveOptions hiveOptions = new HiveOptions(metaStoreURI, dbName, tblName, mapper)
            .withTxnsPerBatch(2)
            .withBatchSize(2);
    bolt = new HiveBolt(hiveOptions);
    bolt.prepare(config, null, new OutputCollector(collector));

    Tuple tuple1 = generateTestTuple(1, "SJC", "Sunnyvale", "CA");
    Tuple tuple2 = generateTestTuple(2, "SFO", "San Jose", "CA");
    bolt.execute(tuple1);

    // The tick should force a flush, causing tuple1 to be acked.
    Tuple mockTick = MockTupleHelpers.mockTickTuple();
    bolt.execute(mockTick);
    verify(collector).ack(tuple1);

    // The second tuple should NOT be acked: the tick cleared the batch,
    // so this is the first tuple of a new, still-incomplete batch.
    bolt.execute(tuple2);
    verify(collector, never()).ack(tuple2);

    bolt.cleanup();
}
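Outside of tests, tick tuples only arrive if the topology asks for them. A minimal sketch using Storm's standard config key (the 15-second interval is an arbitrary example); storm-hive's HiveOptions also exposes a withTickTupleInterval(...) option for the same purpose:

import org.apache.storm.Config;
import org.apache.storm.utils.TupleUtils;

// Ask Storm to deliver a tick tuple to the topology's bolts every 15 seconds.
Config conf = new Config();
conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 15);

// Inside execute(), a bolt typically separates ticks from data tuples:
//     if (TupleUtils.isTick(tuple)) { /* time-based flush */ }
//     else                          { /* buffer the data tuple */ }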
Use of org.apache.storm.hive.bolt.mapper.JsonRecordHiveMapper in project storm by apache.
From class TestHiveBolt, method testNoAcksIfFlushFails:
@Test
public void testNoAcksIfFlushFails() throws Exception {
    JsonRecordHiveMapper mapper = new JsonRecordHiveMapper()
            .withColumnFields(new Fields(colNames1))
            .withPartitionFields(new Fields(partNames));
    HiveOptions hiveOptions = new HiveOptions(metaStoreURI, dbName, tblName, mapper)
            .withTxnsPerBatch(2)
            .withBatchSize(2);
    HiveBolt spyBolt = Mockito.spy(new HiveBolt(hiveOptions));

    // This forces a failure of all the flush attempts.
    doThrow(new InterruptedException()).when(spyBolt).flushAllWriters(true);
    spyBolt.prepare(config, null, new OutputCollector(collector));

    Tuple tuple1 = generateTestTuple(1, "SJC", "Sunnyvale", "CA");
    Tuple tuple2 = generateTestTuple(2, "SFO", "San Jose", "CA");
    spyBolt.execute(tuple1);
    spyBolt.execute(tuple2);

    verify(collector, never()).ack(tuple1);
    verify(collector, never()).ack(tuple2);

    spyBolt.cleanup();
}
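One note on the stubbing style above: flushAllWriters is a void method, so Mockito's doThrow(...).when(spy).method() form is required, and with a spy it has the extra benefit of not invoking the real method while the stub is registered:

// Works for void methods and spies: flushAllWriters(true) is intercepted,
// never actually executed while the stub is being set up.
doThrow(new InterruptedException()).when(spyBolt).flushAllWriters(true);

// when(spyBolt.flushAllWriters(true)).thenThrow(...) is not an option here:
// it does not compile for a void method, and on a spy the when(...) call
// would invoke the real flushAllWriters(true) first.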