
Example 1 with Datapoint

Use of org.openkilda.messaging.info.Datapoint in project open-kilda by telstra.

From the class DatapointParseBolt, the execute method:

@Override
public void execute(Tuple tuple) {
    final String data = tuple.getString(0);
    LOGGER.debug("Processing datapoint: " + data);
    try {
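        // the incoming tuple carries the datapoint as a JSON string; deserialize it first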
        Datapoint datapoint = MAPPER.readValue(data, Datapoint.class);
        List<Object> stream = Stream.of(datapoint.hashCode(), datapoint).collect(Collectors.toList());
        collector.emit(stream);
    } catch (Exception e) {
        LOGGER.error("Failed reading data: " + data, e);
    } finally {
        collector.ack(tuple);
    }
}
Also used : Datapoint(org.openkilda.messaging.info.Datapoint)
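
The emit above sends two values per tuple, so the bolt's declareOutputFields has to declare two matching fields. A minimal sketch (not taken from the project): the "datapoint" name is consistent with the getValueByField("datapoint") lookup in Example 4, while "hash" is an assumed name.

// imports assumed: org.apache.storm.topology.OutputFieldsDeclarer, org.apache.storm.tuple.Fields
@Override
public void declareOutputFields(OutputFieldsDeclarer declarer) {
    // one field for the datapoint's hash, one for the Datapoint object itself;
    // "hash" is an assumed name, "datapoint" matches the field read by OpenTSDBFilterBolt
    declarer.declare(new Fields("hash", "datapoint"));
}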

Example 2 with Datapoint

Use of org.openkilda.messaging.info.Datapoint in project open-kilda by telstra.

From the class OpenTSDBTopologyTest, the shouldSendDatapointRequestsTwice test:

@Ignore
@Test
public void shouldSendDatapointRequestsTwice() throws Exception {
    Datapoint datapoint1 = new Datapoint("metric", timestamp, Collections.emptyMap(), 123);
    String jsonDatapoint1 = MAPPER.writeValueAsString(datapoint1);
    Datapoint datapoint2 = new Datapoint("metric", timestamp, Collections.emptyMap(), 456);
    String jsonDatapoint2 = MAPPER.writeValueAsString(datapoint2);
    MockedSources sources = new MockedSources();
    sources.addMockData(Topic.OTSDB + "-spout", new Values(jsonDatapoint1), new Values(jsonDatapoint2));
    completeTopologyParam.setMockedSources(sources);
    Testing.withTrackedCluster(clusterParam, (cluster) -> {
        OpenTSDBTopology topology = new TestingTargetTopology(new TestingKafkaBolt());
        StormTopology stormTopology = topology.createTopology();
        Testing.completeTopology(cluster, stormTopology, completeTopologyParam);
    });
    // verify that a request is sent to the OpenTSDB server for each datapoint, i.e. twice
    mockServer.verify(HttpRequest.request(), VerificationTimes.exactly(2));
}
Also used : Datapoint(org.openkilda.messaging.info.Datapoint) MockedSources(org.apache.storm.testing.MockedSources) StormTopology(org.apache.storm.generated.StormTopology) Values(org.apache.storm.tuple.Values) TestingKafkaBolt(org.openkilda.wfm.topology.TestingKafkaBolt) Ignore(org.junit.Ignore) StableAbstractStormTest(org.openkilda.wfm.StableAbstractStormTest) Test(org.junit.Test)
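
For contrast, here is a companion check one could write with the same harness: feed the identical serialized datapoint twice and, because the filter bolt suppresses repeated values (see isUpdateRequired in Example 3), expect exactly one request at the mock OpenTSDB endpoint. This is a sketch reusing the fields of the test above (timestamp, completeTopologyParam, clusterParam, mockServer), not the project's own test.

@Ignore
@Test
public void shouldSendDatapointRequestsOnlyOnce() throws Exception {
    Datapoint datapoint = new Datapoint("metric", timestamp, Collections.emptyMap(), 123);
    String jsonDatapoint = MAPPER.writeValueAsString(datapoint);
    MockedSources sources = new MockedSources();
    // the same serialized datapoint is offered to the topology twice
    sources.addMockData(Topic.OTSDB + "-spout", new Values(jsonDatapoint), new Values(jsonDatapoint));
    completeTopologyParam.setMockedSources(sources);
    Testing.withTrackedCluster(clusterParam, (cluster) -> {
        OpenTSDBTopology topology = new TestingTargetTopology(new TestingKafkaBolt());
        StormTopology stormTopology = topology.createTopology();
        Testing.completeTopology(cluster, stormTopology, completeTopologyParam);
    });
    // only the first occurrence should reach the OpenTSDB endpoint
    mockServer.verify(HttpRequest.request(), VerificationTimes.exactly(1));
}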

Example 3 with Datapoint

Use of org.openkilda.messaging.info.Datapoint in project open-kilda by telstra.

From the class OpenTSDBFilterBolt, the isUpdateRequired method:

private boolean isUpdateRequired(Datapoint datapoint) {
    boolean update = true;
    if (storage.containsKey(datapoint.hashCode())) {
        Datapoint prevDatapoint = storage.get(datapoint.hashCode());
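        // forward the sample only if its value changed or the previously forwarded one is at least TEN_MINUTES old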
        update = !prevDatapoint.getValue().equals(datapoint.getValue()) || datapoint.getTime() - prevDatapoint.getTime() >= TEN_MINUTES;
    }
    return update;
}
Also used : Datapoint(org.openkilda.messaging.info.Datapoint)
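
Example 4 below pairs this check with an addDatapoint(...) call whose body is not shown on this page. A plausible sketch, assuming the cache is keyed by the same datapoint.hashCode() used here:

// a sketch only; the real addDatapoint(...) is not shown on this page
private void addDatapoint(Datapoint datapoint) {
    // remember the last forwarded sample under the same key that isUpdateRequired(...) reads
    storage.put(datapoint.hashCode(), datapoint);
}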

Example 4 with Datapoint

Use of org.openkilda.messaging.info.Datapoint in project open-kilda by telstra.

From the class OpenTSDBFilterBolt, the execute method:

@Override
public void execute(Tuple tuple) {
    if (!tuple.contains("datapoint")) {
        // TODO: Should make sure tuple comes from correct bolt, ie not TickTuple
        collector.ack(tuple);
        return;
    }
    Datapoint datapoint = (Datapoint) tuple.getValueByField("datapoint");
    if (isUpdateRequired(datapoint)) {
        addDatapoint(datapoint);
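        // flatten the datapoint into separate metric/time/value/tags fields before emitting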
        List<Object> stream = Stream.of(datapoint.getMetric(), datapoint.getTime(), datapoint.getValue(), datapoint.getTags()).collect(Collectors.toList());
        LOGGER.debug("emit: " + stream);
        collector.emit(stream);
    }
    collector.ack(tuple);
}
Also used : Datapoint(org.openkilda.messaging.info.Datapoint)
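
One way to resolve the TODO above, assuming Storm's org.apache.storm.utils.TupleUtils helper is available (it ships with Storm 1.x), would be an explicit tick-tuple guard at the top of execute; a sketch:

// import assumed: org.apache.storm.utils.TupleUtils
if (TupleUtils.isTick(tuple) || !tuple.contains("datapoint")) {
    // acknowledge and skip anything that is not a datapoint tuple
    collector.ack(tuple);
    return;
}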

Example 5 with Datapoint

Use of org.openkilda.messaging.info.Datapoint in project open-kilda by telstra.

From the class IslStatsBoltTest, the buildTsdbTuple test:

@Test
public void buildTsdbTuple() throws Exception {
    List<Object> tsdbTuple = statsBolt.buildTsdbTuple(islInfoData, TIMESTAMP);
    assertThat(tsdbTuple.size(), is(1));
    Datapoint datapoint = Utils.MAPPER.readValue(tsdbTuple.get(0).toString(), Datapoint.class);
    assertEquals("pen.isl.latency", datapoint.getMetric());
    assertEquals((Long) TIMESTAMP, datapoint.getTime());
    assertEquals(LATENCY, datapoint.getValue());
    Map<String, String> pathNode = datapoint.getTags();
    assertEquals(SWITCH1_ID, pathNode.get("src_switch"));
    assertEquals(SWITCH2_ID, pathNode.get("dst_switch"));
    assertEquals(SWITCH1_PORT, Integer.parseInt(pathNode.get("src_port")));
    assertEquals(SWITCH2_PORT, Integer.parseInt(pathNode.get("dst_port")));
}
Also used : Datapoint(org.openkilda.messaging.info.Datapoint) CoreMatchers.containsString(org.hamcrest.CoreMatchers.containsString) Test(org.junit.Test)
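
The assertions above rely on a Datapoint surviving the Jackson round-trip through Utils.MAPPER. A minimal illustration in the same test's vocabulary, assuming the constructor order seen in Example 2 (metric, time, tags, value) and that the test's constants are in scope:

// import assumed: java.util.HashMap (Map and the JUnit asserts are already used by the test)
Map<String, String> tags = new HashMap<>();
tags.put("src_switch", SWITCH1_ID);
tags.put("src_port", String.valueOf(SWITCH1_PORT));
tags.put("dst_switch", SWITCH2_ID);
tags.put("dst_port", String.valueOf(SWITCH2_PORT));

Datapoint original = new Datapoint("pen.isl.latency", TIMESTAMP, tags, LATENCY);
String json = Utils.MAPPER.writeValueAsString(original);
Datapoint restored = Utils.MAPPER.readValue(json, Datapoint.class);

// the restored instance should match the original field for field
assertEquals(original.getMetric(), restored.getMetric());
assertEquals(original.getTime(), restored.getTime());
assertEquals(original.getValue(), restored.getValue());
assertEquals(original.getTags(), restored.getTags());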

Aggregations

Datapoint (org.openkilda.messaging.info.Datapoint): 9 usages
Test (org.junit.Test): 5 usages
StormTopology (org.apache.storm.generated.StormTopology): 4 usages
MockedSources (org.apache.storm.testing.MockedSources): 4 usages
Values (org.apache.storm.tuple.Values): 4 usages
Ignore (org.junit.Ignore): 4 usages
StableAbstractStormTest (org.openkilda.wfm.StableAbstractStormTest): 4 usages
TestingKafkaBolt (org.openkilda.wfm.topology.TestingKafkaBolt): 4 usages
File (java.io.File): 1 usage
IOException (java.io.IOException): 1 usage
java.util (java.util): 1 usage
Map (java.util.Map): 1 usage
Collectors.toList (java.util.stream.Collectors.toList): 1 usage
IntStream (java.util.stream.IntStream): 1 usage
Testing (org.apache.storm.Testing): 1 usage
KafkaBolt (org.apache.storm.kafka.bolt.KafkaBolt): 1 usage
FixedTuple (org.apache.storm.testing.FixedTuple): 1 usage
TopologyBuilder (org.apache.storm.topology.TopologyBuilder): 1 usage
CoreMatchers.containsString (org.hamcrest.CoreMatchers.containsString): 1 usage
CoreMatchers.is (org.hamcrest.CoreMatchers.is): 1 usage