
Example 6 with Tuple

Use of org.apache.storm.tuple.Tuple in project storm by apache.

From the class EsPercolateBoltTest, method testEsPercolateBolt.

@Test
public void testEsPercolateBolt() throws Exception {
    String source = "{\"user\":\"user1\"}";
    String index = "index1";
    String type = ".percolator";
    node.client().prepareIndex("index1", ".percolator").setId("1").setSource("{\"query\":{\"match\":{\"user\":\"user1\"}}}").execute().actionGet();
    Tuple tuple = EsTestUtil.generateTestTuple(source, index, type, null);
    bolt.execute(tuple);
    verify(outputCollector).ack(tuple);
    // Mixing the any() matcher with a concrete value inside new Values(...) is
    // invalid Mockito usage; capture the emitted values and inspect them instead.
    ArgumentCaptor<Values> emitted = ArgumentCaptor.forClass(Values.class);
    verify(outputCollector).emit(emitted.capture());
    assertEquals(source, emitted.getValue().get(0));
}
Also used : Values(org.apache.storm.tuple.Values) Tuple(org.apache.storm.tuple.Tuple) PercolateResponse(org.elasticsearch.action.percolate.PercolateResponse) IntegrationTest(org.apache.storm.testing.IntegrationTest) Test(org.junit.Test)
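The test above registers a query document in the special .percolator index and then checks that a matching document triggers an emit. Conceptually, percolation inverts normal search: queries are stored first, and each incoming document is matched against them. A minimal sketch of that idea, using a hypothetical PercolateSketch class with named predicates standing in for Elasticsearch's stored percolator queries:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Conceptual sketch of what the .percolator index does: queries are stored
// up front, and each incoming document is matched against all of them.
// The named predicates are hypothetical stand-ins for stored ES queries.
public class PercolateSketch {
    private final Map<String, Predicate<Map<String, String>>> queries = new LinkedHashMap<>();

    void registerQuery(String id, Predicate<Map<String, String>> query) {
        queries.put(id, query);
    }

    // Returns the ids of all stored queries that match the document.
    List<String> percolate(Map<String, String> doc) {
        return queries.entrySet().stream()
                .filter(e -> e.getValue().test(doc))
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        PercolateSketch p = new PercolateSketch();
        // Mirrors the stored query {"query":{"match":{"user":"user1"}}} from the test.
        p.registerQuery("1", doc -> "user1".equals(doc.get("user")));
        System.out.println(p.percolate(Map.of("user", "user1")));  // [1]
        System.out.println(p.percolate(Map.of("user", "user2")));  // []
    }
}
```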

Example 7 with Tuple

Use of org.apache.storm.tuple.Tuple in project storm by apache.

From the class TestHiveBolt, method testWithoutPartitions.

@Test
public void testWithoutPartitions() throws Exception {
    HiveSetupUtil.dropDB(conf, dbName1);
    HiveSetupUtil.createDbAndTable(conf, dbName1, tblName1, null, colNames, colTypes, null, dbLocation);
    DelimitedRecordHiveMapper mapper = new DelimitedRecordHiveMapper().withColumnFields(new Fields(colNames));
    HiveOptions hiveOptions = new HiveOptions(metaStoreURI, dbName1, tblName1, mapper).withTxnsPerBatch(2).withBatchSize(2).withAutoCreatePartitions(false);
    bolt = new HiveBolt(hiveOptions);
    bolt.prepare(config, null, collector);
    Integer id = 100;
    String msg = "test-123";
    String city = "sunnyvale";
    String state = "ca";
    checkRecordCountInTable(tblName1, dbName1, 0);
    Set<Tuple> tupleSet = new HashSet<Tuple>();
    for (int i = 0; i < 4; i++) {
        Tuple tuple = generateTestTuple(id, msg, city, state);
        bolt.execute(tuple);
        tupleSet.add(tuple);
    }
    for (Tuple t : tupleSet) verify(collector).ack(t);
    bolt.cleanup();
    checkRecordCountInTable(tblName1, dbName1, 4);
}
Also used : Fields(org.apache.storm.tuple.Fields) DelimitedRecordHiveMapper(org.apache.storm.hive.bolt.mapper.DelimitedRecordHiveMapper) HiveOptions(org.apache.storm.hive.common.HiveOptions) Tuple(org.apache.storm.tuple.Tuple) HashSet(java.util.HashSet) Test(org.junit.Test)

Example 8 with Tuple

Use of org.apache.storm.tuple.Tuple in project storm by apache.

From the class TestHiveBolt, method testNoTickEmptyBatches.

@Test
public void testNoTickEmptyBatches() throws Exception {
    JsonRecordHiveMapper mapper = new JsonRecordHiveMapper().withColumnFields(new Fields(colNames1)).withPartitionFields(new Fields(partNames));
    HiveOptions hiveOptions = new HiveOptions(metaStoreURI, dbName, tblName, mapper).withTxnsPerBatch(2).withBatchSize(2);
    bolt = new HiveBolt(hiveOptions);
    bolt.prepare(config, null, new OutputCollector(collector));
    // A tick arriving on an empty batch should not cause any acks or emits,
    // since there is nothing to flush.
    Tuple mockTick = MockTupleHelpers.mockTickTuple();
    bolt.execute(mockTick);
    verifyZeroInteractions(collector);
    bolt.cleanup();
}
Also used : JsonRecordHiveMapper(org.apache.storm.hive.bolt.mapper.JsonRecordHiveMapper) OutputCollector(org.apache.storm.task.OutputCollector) Fields(org.apache.storm.tuple.Fields) HiveOptions(org.apache.storm.hive.common.HiveOptions) Tuple(org.apache.storm.tuple.Tuple) Test(org.junit.Test)
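MockTupleHelpers.mockTickTuple() produces a tuple that looks like one of Storm's periodic system ticks. Bolts typically recognize a tick by its source component and stream id, which Storm defines as the constants "__system" and "__tick". A minimal sketch of that check, with a hypothetical TupleStub record standing in for org.apache.storm.tuple.Tuple:

```java
// Minimal sketch of tick-tuple detection. TupleStub is a hypothetical
// stand-in exposing the two Tuple accessors the check needs; the string
// constants mirror Storm's Constants.SYSTEM_COMPONENT_ID and
// Constants.SYSTEM_TICK_STREAM_ID.
public class TickCheckSketch {
    static final String SYSTEM_COMPONENT_ID = "__system";
    static final String SYSTEM_TICK_STREAM_ID = "__tick";

    record TupleStub(String sourceComponent, String sourceStreamId) {}

    // A tuple is a tick when it comes from the system component on the tick stream.
    static boolean isTick(TupleStub t) {
        return SYSTEM_COMPONENT_ID.equals(t.sourceComponent())
            && SYSTEM_TICK_STREAM_ID.equals(t.sourceStreamId());
    }

    public static void main(String[] args) {
        TupleStub tick = new TupleStub("__system", "__tick");
        TupleStub data = new TupleStub("spout", "default");
        System.out.println(isTick(tick));  // true
        System.out.println(isTick(data));  // false
    }
}
```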

Example 9 with Tuple

Use of org.apache.storm.tuple.Tuple in project storm by apache.

From the class TestHiveBolt, method testNoAcksUntilFlushed.

@Test
public void testNoAcksUntilFlushed() {
    JsonRecordHiveMapper mapper = new JsonRecordHiveMapper().withColumnFields(new Fields(colNames1)).withPartitionFields(new Fields(partNames));
    HiveOptions hiveOptions = new HiveOptions(metaStoreURI, dbName, tblName, mapper).withTxnsPerBatch(2).withBatchSize(2);
    bolt = new HiveBolt(hiveOptions);
    bolt.prepare(config, null, new OutputCollector(collector));
    Tuple tuple1 = generateTestTuple(1, "SJC", "Sunnyvale", "CA");
    Tuple tuple2 = generateTestTuple(2, "SFO", "San Jose", "CA");
    bolt.execute(tuple1);
    verifyZeroInteractions(collector);
    bolt.execute(tuple2);
    verify(collector).ack(tuple1);
    verify(collector).ack(tuple2);
    bolt.cleanup();
}
Also used : JsonRecordHiveMapper(org.apache.storm.hive.bolt.mapper.JsonRecordHiveMapper) OutputCollector(org.apache.storm.task.OutputCollector) Fields(org.apache.storm.tuple.Fields) HiveOptions(org.apache.storm.hive.common.HiveOptions) Tuple(org.apache.storm.tuple.Tuple) Test(org.junit.Test)
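The test verifies the core batching contract: with withBatchSize(2), the first tuple is buffered without any ack, and both tuples are acked only when the second one completes the batch and triggers a flush. A self-contained sketch of that batch-then-ack behavior, where BatchingSink and its String "tuples" are hypothetical stand-ins for HiveBolt and Storm tuples:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of the batch-then-ack behavior the test exercises: tuples are
// buffered and only acked once a full batch is flushed. BatchingSink is a
// hypothetical stand-in for HiveBolt; Strings stand in for tuples.
public class BatchingSink {
    private final int batchSize;
    private final List<String> pending = new ArrayList<>();
    private final Consumer<String> acker;

    BatchingSink(int batchSize, Consumer<String> acker) {
        this.batchSize = batchSize;
        this.acker = acker;
    }

    void execute(String tuple) {
        pending.add(tuple);
        if (pending.size() >= batchSize) {
            flush();
        }
    }

    // Flush writes the batch (elided here) and acks every buffered tuple.
    void flush() {
        pending.forEach(acker);
        pending.clear();
    }

    public static void main(String[] args) {
        List<String> acked = new ArrayList<>();
        BatchingSink sink = new BatchingSink(2, acked::add);
        sink.execute("t1");
        System.out.println(acked);  // [] -- nothing acked until the batch fills
        sink.execute("t2");
        System.out.println(acked);  // [t1, t2]
    }
}
```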

Example 10 with Tuple

Use of org.apache.storm.tuple.Tuple in project storm by apache.

From the class TestHiveBolt, method testWithByteArrayIdandMessage.

@Test
public void testWithByteArrayIdandMessage() throws Exception {
    DelimitedRecordHiveMapper mapper = new DelimitedRecordHiveMapper().withColumnFields(new Fields(colNames)).withPartitionFields(new Fields(partNames));
    HiveOptions hiveOptions = new HiveOptions(metaStoreURI, dbName, tblName, mapper).withTxnsPerBatch(2).withBatchSize(2);
    bolt = new HiveBolt(hiveOptions);
    bolt.prepare(config, null, collector);
    Integer id = 100;
    String msg = "test-123";
    String city = "sunnyvale";
    String state = "ca";
    checkRecordCountInTable(tblName, dbName, 0);
    Set<Tuple> tupleSet = new HashSet<Tuple>();
    for (int i = 0; i < 4; i++) {
        Tuple tuple = generateTestTuple(id, msg, city, state);
        bolt.execute(tuple);
        tupleSet.add(tuple);
    }
    for (Tuple t : tupleSet) verify(collector).ack(t);
    checkRecordCountInTable(tblName, dbName, 4);
    bolt.cleanup();
}
Also used : Fields(org.apache.storm.tuple.Fields) DelimitedRecordHiveMapper(org.apache.storm.hive.bolt.mapper.DelimitedRecordHiveMapper) HiveOptions(org.apache.storm.hive.common.HiveOptions) Tuple(org.apache.storm.tuple.Tuple) HashSet(java.util.HashSet) Test(org.junit.Test)
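The DelimitedRecordHiveMapper used above serializes a tuple's column fields into a single delimited record before handing it to the Hive streaming writer (Storm's mapper defaults to a "," delimiter). A minimal sketch of that mapping step, where the List<Object> stands in for a Tuple's values and the helper name is hypothetical:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of what a delimited record mapper does with a tuple's column
// values: render them in field order, joined by a delimiter. The helper
// name is hypothetical; List<Object> stands in for a Tuple's values.
public class DelimitedRecordSketch {
    static String toDelimitedRecord(List<Object> values, String delimiter) {
        return values.stream()
                .map(String::valueOf)
                .collect(Collectors.joining(delimiter));
    }

    public static void main(String[] args) {
        // Mirrors the id/msg/city/state columns used in the test above.
        List<Object> columns = List.of(100, "test-123", "sunnyvale", "ca");
        System.out.println(toDelimitedRecord(columns, ","));  // 100,test-123,sunnyvale,ca
    }
}
```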

Aggregations

Tuple (org.apache.storm.tuple.Tuple) 85
Test (org.junit.Test) 30
Fields (org.apache.storm.tuple.Fields) 13
OutputCollector (org.apache.storm.task.OutputCollector) 11
Values (org.apache.storm.tuple.Values) 11
ArrayList (java.util.ArrayList) 10
HiveOptions (org.apache.storm.hive.common.HiveOptions) 10
TupleWindow (org.apache.storm.windowing.TupleWindow) 9
HashMap (java.util.HashMap) 7
Test (org.testng.annotations.Test) 7
GlobalStreamId (org.apache.storm.generated.GlobalStreamId) 6
DelimitedRecordHiveMapper (org.apache.storm.hive.bolt.mapper.DelimitedRecordHiveMapper) 6
HashSet (java.util.HashSet) 5
JsonRecordHiveMapper (org.apache.storm.hive.bolt.mapper.JsonRecordHiveMapper) 5
TopologyContext (org.apache.storm.task.TopologyContext) 5
TupleImpl (org.apache.storm.tuple.TupleImpl) 5
BasicOutputCollector (org.apache.storm.topology.BasicOutputCollector) 4
Map (java.util.Map) 3
Callback (org.apache.kafka.clients.producer.Callback) 3
ProducerRecord (org.apache.kafka.clients.producer.ProducerRecord) 3