
Example 91 with StormTopology

Use of org.apache.storm.generated.StormTopology in project streamline by hortonworks.

Class NormalizationTopologyTest, method testNormalizationTopology.

public void testNormalizationTopology(NormalizationProcessor normalizationProcessor) throws Exception {
    final Config config = new Config();
    config.setDebug(true);
    final String topologyName = "SplitJoinTopologyTest";
    // Build the topology around the given normalization processor.
    final StormTopology topology = createTopology(normalizationProcessor);
    log.info("Created topology with name: [{}] and topology: [{}]", topologyName, topology);
    ILocalCluster localCluster = new LocalCluster();
    log.info("Submitting topology: [{}]", topologyName);
    localCluster.submitTopology(topologyName, config, topology);
    // Let the topology run briefly on the local cluster before shutting it down.
    Thread.sleep(2000);
    localCluster.shutdown();
}
Also used : LocalCluster(org.apache.storm.LocalCluster) ILocalCluster(org.apache.storm.ILocalCluster) Config(org.apache.storm.Config) StormTopology(org.apache.storm.generated.StormTopology)
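
The createTopology(normalizationProcessor) helper called above is not part of this excerpt. A minimal sketch of how such a helper could assemble a StormTopology with TopologyBuilder is shown below; the "events" spout and the NormalizationBolt wrapper are hypothetical placeholders for illustration, not streamline classes.

import org.apache.storm.generated.StormTopology;
import org.apache.storm.topology.TopologyBuilder;

// Sketch only: TestEventSpout and NormalizationBolt are illustrative placeholders.
private static StormTopology createTopology(NormalizationProcessor processor) {
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("events", new TestEventSpout(), 1);
    builder.setBolt("normalizer", new NormalizationBolt(processor), 1)
           .shuffleGrouping("events");
    return builder.createTopology();
}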

Example 92 with StormTopology

Use of org.apache.storm.generated.StormTopology in project incubator-heron by apache.

Class GeneralTopologyContext, method getRawTopology.

/**
 * Gets the Thrift object representing the topology.
 *
 * @return the Thrift definition representing the topology
 */
@SuppressWarnings("deprecation")
public StormTopology getRawTopology() {
    StormTopology stormTopology = new StormTopology();
    // Wrap each Heron spout definition in a Storm-compatible SpoutSpec, keyed by component name.
    Map<String, SpoutSpec> spouts = new HashMap<>();
    for (TopologyAPI.Spout spout : this.delegate.getRawTopology().getSpoutsList()) {
        spouts.put(spout.getComp().getName(), new SpoutSpec(spout));
    }
    // Wrap each Heron bolt definition in a Storm-compatible Bolt, keyed by component name.
    Map<String, Bolt> bolts = new HashMap<>();
    for (TopologyAPI.Bolt bolt : this.delegate.getRawTopology().getBoltsList()) {
        bolts.put(bolt.getComp().getName(), new Bolt(bolt));
    }
    stormTopology.set_spouts(spouts);
    stormTopology.set_bolts(bolts);
    return stormTopology;
}
Also used : SpoutSpec(org.apache.storm.generated.SpoutSpec) HashMap(java.util.HashMap) StormTopology(org.apache.storm.generated.StormTopology) Bolt(org.apache.storm.generated.Bolt) TopologyAPI(com.twitter.heron.api.generated.TopologyAPI)
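
Since the method's whole job is to hand back a Storm-style topology object, a short hedged usage sketch may help: the generated StormTopology exposes get_spouts()/get_bolts() accessors for the maps populated above. The context variable here is assumed to be a GeneralTopologyContext instance.

// Sketch: inspect the converted topology returned by getRawTopology().
StormTopology storm = context.getRawTopology();
for (Map.Entry<String, SpoutSpec> spout : storm.get_spouts().entrySet()) {
    System.out.println("spout component: " + spout.getKey());
}
for (Map.Entry<String, Bolt> bolt : storm.get_bolts().entrySet()) {
    System.out.println("bolt component: " + bolt.getKey());
}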

Example 93 with StormTopology

Use of org.apache.storm.generated.StormTopology in project incubator-heron by apache.

Class EcoSubmitterTest, method submitTopology_AllGood_BehavesAsExpected.

@Test
public void submitTopology_AllGood_BehavesAsExpected() throws Exception {
    Config config = new Config();
    StormTopology topology = new StormTopology();
    // Stub the static StormSubmitter.submitTopology call so no real submission happens.
    PowerMockito.spy(StormSubmitter.class);
    PowerMockito.doNothing().when(StormSubmitter.class, "submitTopology", any(String.class), any(Config.class), any(StormTopology.class));
    subject.submitTopology("name", config, topology);
    // Verify that the submitter delegated to StormSubmitter exactly once.
    PowerMockito.verifyStatic(times(1));
    StormSubmitter.submitTopology(anyString(), any(Config.class), any(StormTopology.class));
}
Also used : Config(com.twitter.heron.api.Config) StormTopology(org.apache.storm.generated.StormTopology) Matchers.anyString(org.mockito.Matchers.anyString) PrepareForTest(org.powermock.core.classloader.annotations.PrepareForTest) Test(org.junit.Test)
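
The PrepareForTest entry in the "Also used" line hints at the PowerMock scaffolding this static stubbing relies on. A minimal sketch of the class-level setup such a test typically needs follows; the runner choice is an assumption, not taken from the Heron source.

import org.junit.runner.RunWith;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

// Assumed class-level wiring: StormSubmitter must be listed in @PrepareForTest
// for PowerMockito.spy / verifyStatic on its static methods to work.
@RunWith(PowerMockRunner.class)
@PrepareForTest(StormSubmitter.class)
public class EcoSubmitterTest {
    // ... test methods such as submitTopology_AllGood_BehavesAsExpected ...
}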

Example 94 with StormTopology

Use of org.apache.storm.generated.StormTopology in project open-kilda by telstra.

Class CacheTopologyTest, method setupOnce.

@BeforeClass
public static void setupOnce() throws Exception {
    AbstractStormTest.setupOnce();
    flows.add(firstFlow);
    flows.add(secondFlow);
    // Build and submit the cache topology to the shared local cluster.
    topology = new CacheTopology(makeLaunchEnvironment());
    StormTopology stormTopology = topology.createTopology();
    Config config = stormConfig();
    cluster.submitTopology(CacheTopologyTest.class.getSimpleName(), config, stormTopology);
    // Start Kafka consumers for the topology-engine, flow (WFM), and ctrl topics.
    teConsumer = new TestKafkaConsumer(topology.getConfig().getKafkaTopoEngTopic(), Destination.TOPOLOGY_ENGINE, kafkaProperties(UUID.nameUUIDFromBytes(Destination.TOPOLOGY_ENGINE.toString().getBytes()).toString()));
    teConsumer.start();
    flowConsumer = new TestKafkaConsumer(topology.getConfig().getKafkaFlowTopic(), Destination.WFM, kafkaProperties(UUID.nameUUIDFromBytes(Destination.WFM.toString().getBytes()).toString()));
    flowConsumer.start();
    ctrlConsumer = new TestKafkaConsumer(topology.getConfig().getKafkaCtrlTopic(), Destination.CTRL_CLIENT, kafkaProperties(UUID.nameUUIDFromBytes(Destination.CTRL_CLIENT.toString().getBytes()).toString()));
    ctrlConsumer.start();
}
Also used : TestKafkaConsumer(org.openkilda.wfm.topology.TestKafkaConsumer) Config(org.apache.storm.Config) StormTopology(org.apache.storm.generated.StormTopology)
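
A setup like this is normally paired with a teardown that removes the topology from the shared local cluster. A hedged sketch of such a counterpart is shown below, assuming the cluster field inherited from AbstractStormTest and using ILocalCluster.killTopology; the method name teardownOnce is hypothetical.

import org.junit.AfterClass;

@AfterClass
public static void teardownOnce() throws Exception {
    // Kill the topology submitted in setupOnce(); the Kafka consumers would be stopped here as well.
    cluster.killTopology(CacheTopologyTest.class.getSimpleName());
}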

Example 95 with StormTopology

Use of org.apache.storm.generated.StormTopology in project open-kilda by telstra.

Class OFELinkBoltFloodTest, method warmBoltOnHighLoadedTopic.

@Test(timeout = 5000 * 60)
public void warmBoltOnHighLoadedTopic() throws Exception {
    topology = new OFEventWFMTopology(makeLaunchEnvironment());
    teConsumer = new TestKafkaConsumer(topology.getConfig().getKafkaTopoEngTopic(), kafkaProperties(UUID.nameUUIDFromBytes(Destination.TOPOLOGY_ENGINE.toString().getBytes()).toString()));
    teConsumer.start();
    // Number of messages already sitting in the topic before the bolt starts.
    final int floodSize = 100000;
    SwitchInfoData data = new SwitchInfoData("switchId", SwitchState.ADDED, "address", "hostname", "description", "controller");
    InfoMessage message = new InfoMessage(data, System.currentTimeMillis(), UUID.randomUUID().toString());
    // Flood the discovery topic before the topology is submitted.
    sendMessages(message, topology.getConfig().getKafkaTopoDiscoTopic(), floodSize);
    StormTopology stormTopology = topology.createTopology();
    Config config = stormConfig();
    cluster.submitTopology(OFELinkBoltFloodTest.class.getSimpleName(), config, stormTopology);
    NetworkInfoData dump = new NetworkInfoData("test", Collections.emptySet(), Collections.emptySet(), Collections.emptySet(), Collections.emptySet());
    InfoMessage info = new InfoMessage(dump, 0, DEFAULT_CORRELATION_ID, Destination.WFM);
    String request = objectMapper.writeValueAsString(info);
    // Send the dump message to the topic; it lands after the flood, at offset floodSize + 1.
    kProducer.pushMessage(topology.getConfig().getKafkaTopoDiscoTopic(), request);
    // Wait until the consumer has seen every flooded message.
    int pooled = 0;
    while (pooled < floodSize) {
        if (teConsumer.pollMessage() != null)
            ++pooled;
    }
    assertEquals(floodSize, pooled);
}
Also used : NetworkInfoData(org.openkilda.messaging.info.discovery.NetworkInfoData) TestKafkaConsumer(org.openkilda.wfm.topology.TestKafkaConsumer) InfoMessage(org.openkilda.messaging.info.InfoMessage) Config(org.apache.storm.Config) StormTopology(org.apache.storm.generated.StormTopology) SwitchInfoData(org.openkilda.messaging.info.event.SwitchInfoData) AbstractStormTest(org.openkilda.wfm.AbstractStormTest) Test(org.junit.Test)
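
The sendMessages helper that floods the topic is not shown in this excerpt. A minimal sketch of how such a flood producer could look with the plain Kafka client follows, assuming the InfoMessage is serialized to a JSON String first and that a local test broker is reachable on localhost:9092.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

class FloodProducerSketch {
    // Sketch only: pushes the same pre-serialized payload `count` times into `topic`.
    static void sendMessages(String payload, String topic, int count) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local test broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < count; i++) {
                producer.send(new ProducerRecord<>(topic, payload));
            }
            producer.flush();
        }
    }
}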

Aggregations

StormTopology (org.apache.storm.generated.StormTopology): 162
Config (org.apache.storm.Config): 72
HashMap (java.util.HashMap): 67
Test (org.junit.Test): 59
TopologyBuilder (org.apache.storm.topology.TopologyBuilder): 44
Map (java.util.Map): 35
ArrayList (java.util.ArrayList): 29
TopologyDetails (org.apache.storm.scheduler.TopologyDetails): 27
Test (org.junit.jupiter.api.Test): 26
List (java.util.List): 24
Bolt (org.apache.storm.generated.Bolt): 23
Values (org.apache.storm.tuple.Values): 23
StormMetricsRegistry (org.apache.storm.metric.StormMetricsRegistry): 22
Cluster (org.apache.storm.scheduler.Cluster): 22
SupervisorDetails (org.apache.storm.scheduler.SupervisorDetails): 22
Topologies (org.apache.storm.scheduler.Topologies): 22
Fields (org.apache.storm.tuple.Fields): 22
INimbus (org.apache.storm.scheduler.INimbus): 21
TopologyDef (org.apache.storm.flux.model.TopologyDef): 20
TestUtilsForResourceAwareScheduler (org.apache.storm.scheduler.resource.TestUtilsForResourceAwareScheduler): 20