
Example 6 with TopologyContext

Use of org.apache.storm.task.TopologyContext in the apache/nifi project: class TestNiFiBolt, method testTickTupleWhenExceedingBatchInterval.

@Test
public void testTickTupleWhenExceedingBatchInterval() throws InterruptedException {
    final int batchInterval = 1;
    final NiFiBolt bolt = new TestableNiFiBolt(siteToSiteClientConfig, niFiDataPacketBuilder, tickFrequency).withBatchInterval(batchInterval);
    // prepare the bolt
    Map conf = mock(Map.class);
    TopologyContext context = mock(TopologyContext.class);
    OutputCollector collector = mock(OutputCollector.class);
    bolt.prepare(conf, context, collector);
    // process a regular tuple
    Tuple dataTuple = MockTupleHelpers.mockTuple("nifi", "nifi");
    bolt.execute(dataTuple);
    // sleep so we pass the batch interval
    Thread.sleep(batchInterval + 1000);
    // process a tick tuple
    Tuple tickTuple = MockTupleHelpers.mockTickTuple();
    bolt.execute(tickTuple);
    // should have produced one data packet and acked it
    verify(niFiDataPacketBuilder, times(1)).createNiFiDataPacket(eq(dataTuple));
    verify(collector, times(1)).ack(eq(dataTuple));
}
Also used: OutputCollector (org.apache.storm.task.OutputCollector), TopologyContext (org.apache.storm.task.TopologyContext), HashMap (java.util.HashMap), Map (java.util.Map), Tuple (org.apache.storm.tuple.Tuple), Test (org.junit.Test)
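The test above relies on MockTupleHelpers.mockTickTuple() to fake Storm's system-generated tick tuples. The check a bolt performs to recognize one can be sketched as below; the component and stream names "__system" and "__tick" match Storm's Constants.SYSTEM_COMPONENT_ID and Constants.SYSTEM_TICK_STREAM_ID, while the TupleLike interface is an illustrative stand-in for org.apache.storm.tuple.Tuple so the sketch runs without a Storm dependency.

```java
// Minimal sketch of the tick-tuple check a Storm bolt typically performs.
public class TickTupleCheck {

    // Stand-in for the two accessors of org.apache.storm.tuple.Tuple used here.
    interface TupleLike {
        String getSourceComponent();
        String getSourceStreamId();
    }

    // A tuple is a tick tuple when it comes from the system component
    // on the system tick stream.
    static boolean isTickTuple(TupleLike t) {
        return "__system".equals(t.getSourceComponent())
                && "__tick".equals(t.getSourceStreamId());
    }

    // Tiny factory mirroring what MockTupleHelpers.mockTickTuple() stubs out.
    static TupleLike tuple(String component, String stream) {
        return new TupleLike() {
            public String getSourceComponent() { return component; }
            public String getSourceStreamId() { return stream; }
        };
    }
}
```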

Example 7 with TopologyContext

Use of org.apache.storm.task.TopologyContext in the apache/nifi project: class TestNiFiBolt, method testBatchSize.

@Test
public void testBatchSize() {
    final int batchSize = 3;
    final NiFiBolt bolt = new TestableNiFiBolt(siteToSiteClientConfig, niFiDataPacketBuilder, tickFrequency).withBatchSize(batchSize);
    // prepare the bolt
    Map conf = mock(Map.class);
    TopologyContext context = mock(TopologyContext.class);
    OutputCollector collector = mock(OutputCollector.class);
    bolt.prepare(conf, context, collector);
    // process a regular tuple, haven't hit batch size yet
    Tuple dataTuple1 = MockTupleHelpers.mockTuple("nifi", "nifi");
    bolt.execute(dataTuple1);
    verifyZeroInteractions(niFiDataPacketBuilder);
    // process a regular tuple, haven't hit batch size yet
    Tuple dataTuple2 = MockTupleHelpers.mockTuple("nifi", "nifi");
    bolt.execute(dataTuple2);
    verifyZeroInteractions(niFiDataPacketBuilder);
    // process a regular tuple, triggers batch size
    Tuple dataTuple3 = MockTupleHelpers.mockTuple("nifi", "nifi");
    bolt.execute(dataTuple3);
    verify(niFiDataPacketBuilder, times(batchSize)).createNiFiDataPacket(any(Tuple.class));
    verify(collector, times(batchSize)).ack(any(Tuple.class));
}
Also used: OutputCollector (org.apache.storm.task.OutputCollector), TopologyContext (org.apache.storm.task.TopologyContext), HashMap (java.util.HashMap), Map (java.util.Map), Tuple (org.apache.storm.tuple.Tuple), Test (org.junit.Test)
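The behavior testBatchSize verifies, no writes until the third tuple arrives, then one flush of all three, comes from size-triggered batching. A minimal sketch of that accumulation pattern, with illustrative names (Batcher, flushedBatches) rather than NiFiBolt's actual internals:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of size-triggered batching: messages accumulate in a buffer
// until the batch size is reached, then the whole batch flushes at once.
public class Batcher {
    private final int batchSize;
    private final List<String> buffer = new ArrayList<>();
    final List<List<String>> flushedBatches = new ArrayList<>();

    Batcher(int batchSize) { this.batchSize = batchSize; }

    void add(String message) {
        buffer.add(message);
        if (buffer.size() >= batchSize) {
            flushedBatches.add(new ArrayList<>(buffer)); // flush a copy
            buffer.clear();                              // start a new batch
        }
    }

    int pending() { return buffer.size(); }
}
```

This is why the test asserts verifyZeroInteractions(niFiDataPacketBuilder) after the first two tuples: nothing reaches the writer until the buffer hits batchSize.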

Example 8 with TopologyContext

Use of org.apache.storm.task.TopologyContext in the apache/nifi project: class TestNiFiBolt, method testFailure.

@Test
public void testFailure() throws IOException {
    final int batchSize = 3;
    final NiFiBolt bolt = new TestableNiFiBolt(siteToSiteClientConfig, niFiDataPacketBuilder, tickFrequency).withBatchSize(batchSize);
    when(((TestableNiFiBolt) bolt).transaction.complete()).thenThrow(new RuntimeException("Could not complete transaction"));
    // prepare the bolt
    Map conf = mock(Map.class);
    TopologyContext context = mock(TopologyContext.class);
    OutputCollector collector = mock(OutputCollector.class);
    bolt.prepare(conf, context, collector);
    // process a regular tuple, haven't hit batch size yet
    Tuple dataTuple1 = MockTupleHelpers.mockTuple("nifi", "nifi");
    bolt.execute(dataTuple1);
    verifyZeroInteractions(niFiDataPacketBuilder);
    // process a regular tuple, haven't hit batch size yet
    Tuple dataTuple2 = MockTupleHelpers.mockTuple("nifi", "nifi");
    bolt.execute(dataTuple2);
    verifyZeroInteractions(niFiDataPacketBuilder);
    // process a regular tuple, triggers batch size
    Tuple dataTuple3 = MockTupleHelpers.mockTuple("nifi", "nifi");
    bolt.execute(dataTuple3);
    verify(niFiDataPacketBuilder, times(batchSize)).createNiFiDataPacket(any(Tuple.class));
    verify(collector, times(batchSize)).fail(any(Tuple.class));
}
Also used: OutputCollector (org.apache.storm.task.OutputCollector), TopologyContext (org.apache.storm.task.TopologyContext), HashMap (java.util.HashMap), Map (java.util.Map), Tuple (org.apache.storm.tuple.Tuple), Test (org.junit.Test)
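The only difference from testBatchSize is the stubbed transaction.complete() throwing, which flips the final verification from ack to fail. The underlying pattern, failing every tuple in the batch when the flush throws so Storm can replay them, can be sketched as follows; the counters stand in for OutputCollector and none of this is NiFiBolt's literal code:

```java
import java.util.List;

// Sketch of the ack-or-fail decision at flush time: if completing the
// transaction throws, every buffered tuple is failed instead of acked.
public class FlushOutcome {
    int acked = 0;
    int failed = 0;

    void flush(List<String> batch, Runnable completeTransaction) {
        try {
            completeTransaction.run();
            acked += batch.size();   // collector.ack(tuple) per tuple
        } catch (RuntimeException e) {
            failed += batch.size();  // collector.fail(tuple) per tuple
        }
    }
}
```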

Example 9 with TopologyContext

Use of org.apache.storm.task.TopologyContext in the apache/metron project: class BulkMessageWriterBoltTest, method testFlushOnBatchSize.

@Test
public void testFlushOnBatchSize() throws Exception {
    BulkMessageWriterBolt bulkMessageWriterBolt = new BulkMessageWriterBolt("zookeeperUrl").withBulkMessageWriter(bulkMessageWriter).withMessageGetter(MessageGetters.JSON_FROM_FIELD.name()).withMessageGetterField("message");
    bulkMessageWriterBolt.setCuratorFramework(client);
    bulkMessageWriterBolt.setZKCache(cache);
    bulkMessageWriterBolt.getConfigurations().updateSensorIndexingConfig(sensorType, new FileInputStream(sampleSensorIndexingConfigPath));
    bulkMessageWriterBolt.declareOutputFields(declarer);
    verify(declarer, times(1)).declareStream(eq("error"), argThat(new FieldsMatcher("message")));
    Map stormConf = new HashMap();
    doThrow(new Exception()).when(bulkMessageWriter).init(eq(stormConf), any(TopologyContext.class), any(WriterConfiguration.class));
    try {
        bulkMessageWriterBolt.prepare(stormConf, topologyContext, outputCollector);
        fail("A runtime exception should be thrown when bulkMessageWriter.init throws an exception");
    } catch (RuntimeException e) {
    // expected: prepare should wrap the init failure in a RuntimeException
    }
    reset(bulkMessageWriter);
    when(bulkMessageWriter.getName()).thenReturn("hdfs");
    bulkMessageWriterBolt.prepare(stormConf, topologyContext, outputCollector);
    verify(bulkMessageWriter, times(1)).init(eq(stormConf), any(TopologyContext.class), any(WriterConfiguration.class));
    tupleList = new ArrayList<>();
    messageList = new ArrayList<>();
    for (int i = 0; i < 4; i++) {
        when(tuple.getValueByField("message")).thenReturn(fullMessageList.get(i));
        tupleList.add(tuple);
        messageList.add(fullMessageList.get(i));
        bulkMessageWriterBolt.execute(tuple);
        verify(bulkMessageWriter, times(0)).write(eq(sensorType), any(WriterConfiguration.class), eq(tupleList), eq(messageList));
    }
    when(tuple.getValueByField("message")).thenReturn(fullMessageList.get(4));
    tupleList.add(tuple);
    messageList.add(fullMessageList.get(4));
    BulkWriterResponse response = new BulkWriterResponse();
    response.addAllSuccesses(tupleList);
    when(bulkMessageWriter.write(eq(sensorType), any(WriterConfiguration.class), eq(tupleList), argThat(new MessageListMatcher(messageList)))).thenReturn(response);
    bulkMessageWriterBolt.execute(tuple);
    verify(bulkMessageWriter, times(1)).write(eq(sensorType), any(WriterConfiguration.class), eq(tupleList), argThat(new MessageListMatcher(messageList)));
    verify(outputCollector, times(5)).ack(tuple);
    reset(outputCollector);
    doThrow(new Exception()).when(bulkMessageWriter).write(eq(sensorType), any(WriterConfiguration.class), Matchers.anyListOf(Tuple.class), Matchers.anyListOf(JSONObject.class));
    when(tuple.getValueByField("message")).thenReturn(fullMessageList.get(0));
    UnitTestHelper.setLog4jLevel(BulkWriterComponent.class, Level.FATAL);
    for (int i = 0; i < 5; i++) {
        bulkMessageWriterBolt.execute(tuple);
    }
    UnitTestHelper.setLog4jLevel(BulkWriterComponent.class, Level.ERROR);
    verify(outputCollector, times(5)).ack(tuple);
    verify(outputCollector, times(1)).emit(eq(Constants.ERROR_STREAM), any(Values.class));
    verify(outputCollector, times(1)).reportError(any(Throwable.class));
}
Also used: HashMap (java.util.HashMap), Values (org.apache.storm.tuple.Values), WriterConfiguration (org.apache.metron.common.configuration.writer.WriterConfiguration), FileInputStream (java.io.FileInputStream), ParseException (org.json.simple.parser.ParseException), BulkMessageWriterBolt (org.apache.metron.writer.bolt.BulkMessageWriterBolt), JSONObject (org.json.simple.JSONObject), TopologyContext (org.apache.storm.task.TopologyContext), Map (java.util.Map), BulkWriterResponse (org.apache.metron.common.writer.BulkWriterResponse), Tuple (org.apache.storm.tuple.Tuple), BaseEnrichmentBoltTest (org.apache.metron.test.bolt.BaseEnrichmentBoltTest), Test (org.junit.Test)
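The test stubs bulkMessageWriter.write(...) to return a BulkWriterResponse with all tuples marked as successes, which is what lets the bolt ack all five. The bookkeeping such a response object carries, successes to ack, errors keyed by cause to fail and report, can be sketched like this; the method names mirror Metron's BulkWriterResponse in spirit, but this is an illustrative stand-in, not its actual source:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a bulk-write result: which items succeeded, and which
// failed grouped by the exception that caused the failure.
public class WriterResponse {
    private final List<String> successes = new ArrayList<>();
    private final Map<Throwable, List<String>> errors = new HashMap<>();

    void addAllSuccesses(Collection<String> tuples) {
        successes.addAll(tuples);
    }

    void addAllErrors(Throwable cause, Collection<String> tuples) {
        errors.computeIfAbsent(cause, k -> new ArrayList<>()).addAll(tuples);
    }

    boolean hasErrors() { return !errors.isEmpty(); }

    List<String> getSuccesses() { return successes; }
}
```

A bolt consuming such a response acks everything in getSuccesses() and, when hasErrors() is true, emits to the error stream and calls reportError, matching the final three verifications in the test.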

Example 10 with TopologyContext

Use of org.apache.storm.task.TopologyContext in the apache/metron project: class BulkMessageWriterBoltTest, method testFlushOnTickTuple.

@Test
public void testFlushOnTickTuple() throws Exception {
    FakeClock clock = new FakeClock();
    BulkMessageWriterBolt bulkMessageWriterBolt = new BulkMessageWriterBolt("zookeeperUrl").withBulkMessageWriter(bulkMessageWriter).withMessageGetter(MessageGetters.JSON_FROM_FIELD.name()).withMessageGetterField("message");
    bulkMessageWriterBolt.setCuratorFramework(client);
    bulkMessageWriterBolt.setZKCache(cache);
    bulkMessageWriterBolt.getConfigurations().updateSensorIndexingConfig(sensorType, new FileInputStream(sampleSensorIndexingConfigPath));
    bulkMessageWriterBolt.declareOutputFields(declarer);
    verify(declarer, times(1)).declareStream(eq("error"), argThat(new FieldsMatcher("message")));
    Map stormConf = new HashMap();
    when(bulkMessageWriter.getName()).thenReturn("elasticsearch");
    bulkMessageWriterBolt.prepare(stormConf, topologyContext, outputCollector, clock);
    verify(bulkMessageWriter, times(1)).init(eq(stormConf), any(TopologyContext.class), any(WriterConfiguration.class));
    int batchTimeout = bulkMessageWriterBolt.getDefaultBatchTimeout();
    assertEquals(14, batchTimeout);
    tupleList = new ArrayList<>();
    messageList = new ArrayList<>();
    for (int i = 0; i < 3; i++) {
        when(tuple.getValueByField("message")).thenReturn(fullMessageList.get(i));
        tupleList.add(tuple);
        messageList.add(fullMessageList.get(i));
        bulkMessageWriterBolt.execute(tuple);
        verify(bulkMessageWriter, times(0)).write(eq(sensorType), any(WriterConfiguration.class), eq(tupleList), eq(messageList));
    }
    when(tuple.getValueByField("message")).thenReturn(null);
    // mark the tuple as a TickTuple, part 1 of 2
    when(tuple.getSourceComponent()).thenReturn("__system");
    // mark the tuple as a TickTuple, part 2 of 2
    when(tuple.getSourceStreamId()).thenReturn("__tick");
    BulkWriterResponse response = new BulkWriterResponse();
    response.addAllSuccesses(tupleList);
    when(bulkMessageWriter.write(eq(sensorType), any(WriterConfiguration.class), eq(tupleList), argThat(new MessageListMatcher(messageList)))).thenReturn(response);
    clock.advanceToSeconds(2);
    bulkMessageWriterBolt.execute(tuple);
    verify(bulkMessageWriter, times(0)).write(eq(sensorType), any(WriterConfiguration.class), eq(tupleList), argThat(new MessageListMatcher(messageList)));
    // 1 tick
    verify(outputCollector, times(1)).ack(tuple);
    clock.advanceToSeconds(9);
    bulkMessageWriterBolt.execute(tuple);
    verify(bulkMessageWriter, times(1)).write(eq(sensorType), any(WriterConfiguration.class), eq(tupleList), argThat(new MessageListMatcher(messageList)));
    assertEquals(3, tupleList.size());
    // 3 messages + 2nd tick
    verify(outputCollector, times(5)).ack(tuple);
}
Also used: BulkMessageWriterBolt (org.apache.metron.writer.bolt.BulkMessageWriterBolt), HashMap (java.util.HashMap), FakeClock (org.apache.metron.common.system.FakeClock), WriterConfiguration (org.apache.metron.common.configuration.writer.WriterConfiguration), TopologyContext (org.apache.storm.task.TopologyContext), Map (java.util.Map), FileInputStream (java.io.FileInputStream), BulkWriterResponse (org.apache.metron.common.writer.BulkWriterResponse), BaseEnrichmentBoltTest (org.apache.metron.test.bolt.BaseEnrichmentBoltTest), Test (org.junit.Test)
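What makes this test deterministic is injecting a FakeClock into prepare, so the tick at 2 seconds falls inside the batch timeout (no flush) and the tick at 9 seconds falls outside it (flush). The pattern, an injectable clock plus a timeout check on each tick, can be sketched as below; FakeClock mirrors Metron's org.apache.metron.common.system.FakeClock in spirit only, and TimedBuffer is purely illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a time-based flush driven by tick tuples and a fake clock.
public class TimedBuffer {
    interface Clock { long currentTimeMillis(); }

    // Test clock that only moves when told to, so assertions are deterministic.
    static class FakeClock implements Clock {
        private long nowMs = 0;
        public long currentTimeMillis() { return nowMs; }
        void advanceToSeconds(long s) { nowMs = s * 1000; }
    }

    private final Clock clock;
    private final long timeoutMs;
    private long batchStartMs;
    private final List<String> buffer = new ArrayList<>();
    final List<List<String>> flushed = new ArrayList<>();

    TimedBuffer(Clock clock, long timeoutSeconds) {
        this.clock = clock;
        this.timeoutMs = timeoutSeconds * 1000;
        this.batchStartMs = clock.currentTimeMillis();
    }

    void add(String message) {
        if (buffer.isEmpty()) batchStartMs = clock.currentTimeMillis();
        buffer.add(message);
    }

    // Called for every tick tuple; flushes only once the timeout has elapsed.
    void onTick() {
        if (!buffer.isEmpty()
                && clock.currentTimeMillis() - batchStartMs >= timeoutMs) {
            flushed.add(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```

Injecting the clock through the prepare overload (as the test does with bulkMessageWriterBolt.prepare(stormConf, topologyContext, outputCollector, clock)) is what allows the test to advance time without Thread.sleep.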

Aggregations

TopologyContext (org.apache.storm.task.TopologyContext): 62
Test (org.junit.Test): 29
HashMap (java.util.HashMap): 25
OutputCollector (org.apache.storm.task.OutputCollector): 19
Tuple (org.apache.storm.tuple.Tuple): 16
SpoutOutputCollector (org.apache.storm.spout.SpoutOutputCollector): 15
Map (java.util.Map): 14
GlobalStreamId (org.apache.storm.generated.GlobalStreamId): 8
ClientConfiguration (org.apache.pulsar.client.api.ClientConfiguration): 7
Test (org.testng.annotations.Test): 7
WriterConfiguration (org.apache.metron.common.configuration.writer.WriterConfiguration): 6
BulkWriterResponse (org.apache.metron.common.writer.BulkWriterResponse): 6
Collections (java.util.Collections): 5
Grouping (org.apache.storm.generated.Grouping): 5
StormTopology (org.apache.storm.generated.StormTopology): 5
GeneralTopologyContext (org.apache.storm.task.GeneralTopologyContext): 5
OutputCollectorImpl (org.apache.storm.task.OutputCollectorImpl): 5
IRichBolt (org.apache.storm.topology.IRichBolt): 5
IRichSpout (org.apache.storm.topology.IRichSpout): 5
ParserConfigurations (org.apache.metron.common.configuration.ParserConfigurations): 4