Use of org.apache.storm.tuple.Fields in project storm by apache.
From the class TestHiveBolt, method testWithByteArrayIdandMessage:
@Test
public void testWithByteArrayIdandMessage() throws Exception {
    DelimitedRecordHiveMapper mapper = new DelimitedRecordHiveMapper()
        .withColumnFields(new Fields(colNames))
        .withPartitionFields(new Fields(partNames));
    // Two transactions per transaction batch; commit after every two tuples.
    HiveOptions hiveOptions = new HiveOptions(metaStoreURI, dbName, tblName, mapper)
        .withTxnsPerBatch(2)
        .withBatchSize(2);
    bolt = new HiveBolt(hiveOptions);
    bolt.prepare(config, null, collector);
    Integer id = 100;
    String msg = "test-123";
    String city = "sunnyvale";
    String state = "ca";
    checkRecordCountInTable(tblName, dbName, 0);
    Set<Tuple> tupleSet = new HashSet<Tuple>();
    for (int i = 0; i < 4; i++) {
        Tuple tuple = generateTestTuple(id, msg, city, state);
        bolt.execute(tuple);
        tupleSet.add(tuple);
    }
    // Every tuple should have been acked once its batch was committed.
    for (Tuple t : tupleSet) {
        verify(collector).ack(t);
    }
    checkRecordCountInTable(tblName, dbName, 4);
    bolt.cleanup();
}
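The generateTestTuple helper is defined elsewhere in TestHiveBolt and not shown on this page. A minimal sketch of a stand-in, using a Mockito mock of Tuple (the real test builds a TupleImpl against a GeneralTopologyContext; the field names id, msg, city, state are assumptions taken from the test's variables):

// Hypothetical stand-in for the generateTestTuple helper used above.
// Assumes the mapper reads values by field name; a Mockito mock keeps
// the sketch self-contained without a full topology context.
private Tuple generateTestTuple(Object id, Object msg, Object city, Object state) {
    Tuple tuple = Mockito.mock(Tuple.class);
    Mockito.when(tuple.getFields()).thenReturn(new Fields("id", "msg", "city", "state"));
    Mockito.when(tuple.getValueByField("id")).thenReturn(id);
    Mockito.when(tuple.getValueByField("msg")).thenReturn(msg);
    Mockito.when(tuple.getValueByField("city")).thenReturn(city);
    Mockito.when(tuple.getValueByField("state")).thenReturn(state);
    return tuple;
}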
Use of org.apache.storm.tuple.Fields in project storm by apache.
From the class TridentJmsSpout, method getOutputFields:
@Override
public Fields getOutputFields() {
    // Ask the producer to declare its fields, then read the declaration
    // back for the default stream.
    OutputFieldsGetter fieldGetter = new OutputFieldsGetter();
    tupleProducer.declareOutputFields(fieldGetter);
    StreamInfo streamInfo = fieldGetter.getFieldsDeclaration().get(Utils.DEFAULT_STREAM_ID);
    if (streamInfo == null) {
        throw new IllegalArgumentException("Jms Tuple producer has not declared output fields for the default stream");
    }
    return new Fields(streamInfo.get_output_fields());
}
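For context, a producer that satisfies this check might look like the sketch below. The two methods follow the storm-jms JmsTupleProducer contract; the class name and the field name jmsText are illustrative assumptions:

// Illustrative JmsTupleProducer: declares one field on the default stream,
// which is exactly what getOutputFields() above reads back.
// Message/TextMessage come from javax.jms; Values/Fields from org.apache.storm.tuple.
public class TextMessageTupleProducer implements JmsTupleProducer {
    @Override
    public Values toTuple(Message msg) throws JMSException {
        // Assumes text messages; one value per emitted tuple.
        return new Values(((TextMessage) msg).getText());
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // Declaring on the default stream makes the lookup above succeed.
        declarer.declare(new Fields("jmsText"));
    }
}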
Use of org.apache.storm.tuple.Fields in project storm by apache.
From the class TestHiveBolt, method testJsonWriter:
@Test
public void testJsonWriter() throws Exception {
    // A JSON record does not need its columns in the same order
    // as the Hive table, so colNames1 may differ from the table order.
    JsonRecordHiveMapper mapper = new JsonRecordHiveMapper()
        .withColumnFields(new Fields(colNames1))
        .withPartitionFields(new Fields(partNames));
    HiveOptions hiveOptions = new HiveOptions(metaStoreURI, dbName, tblName, mapper)
        .withTxnsPerBatch(2)
        .withBatchSize(1);
    bolt = new HiveBolt(hiveOptions);
    bolt.prepare(config, null, collector);
    Tuple tuple1 = generateTestTuple(1, "SJC", "Sunnyvale", "CA");
    // Tuple tuple2 = generateTestTuple(2, "SFO", "San Jose", "CA");
    bolt.execute(tuple1);
    verify(collector).ack(tuple1);
    // bolt.execute(tuple2);
    // verify(collector).ack(tuple2);
    checkDataWritten(tblName, dbName, "1,SJC,Sunnyvale,CA");
    bolt.cleanup();
}
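The order-independence comes from the JSON mapper emitting named keys rather than positional values. A rough sketch of the idea (mapRecord returning a byte[] is the storm-hive HiveMapper contract; the exact JSON shown is an assumption, not the mapper's verified output):

// Conceptual sketch: each column name becomes a JSON key, so the
// positional order of colNames1 is irrelevant to the written record.
byte[] record = mapper.mapRecord(tuple1);
String json = new String(record, StandardCharsets.UTF_8);
// json is something like {"id":1,"msg":"SJC","city":"Sunnyvale","state":"CA"}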
Use of org.apache.storm.tuple.Fields in project storm by apache.
From the class TestHiveWriter, method testWriteBasic:
@Test
public void testWriteBasic() throws Exception {
    DelimitedRecordHiveMapper mapper = new DelimitedRecordHiveMapper()
        .withColumnFields(new Fields(colNames))
        .withPartitionFields(new Fields(partNames));
    HiveEndPoint endPoint = new HiveEndPoint(metaStoreURI, dbName, tblName, Arrays.asList(partitionVals));
    HiveWriter writer = new HiveWriter(endPoint, 10, true, timeout, callTimeoutPool, mapper, ugi);
    writeTuples(writer, mapper, 3);
    writer.flush(false);
    writer.close();
    checkRecordCountInTable(dbName, tblName, 3);
}
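writeTuples is another helper from TestHiveWriter that this page does not show. A minimal sketch of what it plausibly does, assuming HiveMapper.mapRecord(Tuple) returns a byte[] and HiveWriter exposes write(byte[]); the helper's exact body and the generateTestTuple arguments are assumptions:

// Hypothetical sketch of the writeTuples helper: map each test tuple
// to a delimited record and hand it to the writer.
private void writeTuples(HiveWriter writer, HiveMapper mapper, int count) throws Exception {
    for (int i = 0; i < count; i++) {
        Tuple tuple = generateTestTuple("100", "test-123");
        writer.write(mapper.mapRecord(tuple));
    }
}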
Use of org.apache.storm.tuple.Fields in project storm by apache.
From the class TestHiveWriter, method testInstantiate:
@Test
public void testInstantiate() throws Exception {
    DelimitedRecordHiveMapper mapper = new DelimitedRecordHiveMapper()
        .withColumnFields(new Fields(colNames))
        .withPartitionFields(new Fields(partNames));
    HiveEndPoint endPoint = new HiveEndPoint(metaStoreURI, dbName, tblName, Arrays.asList(partitionVals));
    // Constructing and closing the writer should succeed without any writes.
    HiveWriter writer = new HiveWriter(endPoint, 10, true, timeout, callTimeoutPool, mapper, ugi);
    writer.close();
}