Example 16 with BoltDeclarer

Use of org.apache.storm.topology.BoltDeclarer in project heron by twitter.

From the class StreamBuilder, method buildStreams:

protected void buildStreams(EcoExecutionContext executionContext, TopologyBuilder builder, ObjectBuilder objectBuilder) throws IllegalAccessException, InstantiationException, ClassNotFoundException, NoSuchFieldException, InvocationTargetException {
    EcoTopologyDefinition topologyDefinition = executionContext.getTopologyDefinition();
    Map<String, ComponentStream> componentStreams = new HashMap<>();
    HashMap<String, BoltDeclarer> declarers = new HashMap<>();
    for (StreamDefinition stream : topologyDefinition.getStreams()) {
        Object boltObj = executionContext.getBolt(stream.getTo());
        BoltDeclarer declarer = declarers.get(stream.getTo());
        if (boltObj instanceof IRichBolt) {
            if (declarer == null) {
                declarer = builder.setBolt(stream.getTo(), (IRichBolt) boltObj, topologyDefinition.parallelismForBolt(stream.getTo()));
                declarers.put(stream.getTo(), declarer);
            }
        } else if (boltObj instanceof IBasicBolt) {
            if (declarer == null) {
                declarer = builder.setBolt(stream.getTo(), (IBasicBolt) boltObj, topologyDefinition.parallelismForBolt(stream.getTo()));
                declarers.put(stream.getTo(), declarer);
            }
        } else if (boltObj instanceof IWindowedBolt) {
            if (declarer == null) {
                declarer = builder.setBolt(stream.getTo(), (IWindowedBolt) boltObj, topologyDefinition.parallelismForBolt(stream.getTo()));
                declarers.put(stream.getTo(), declarer);
            }
        } else {
            throw new IllegalArgumentException("Class does not appear to be a bolt: " + boltObj.getClass().getName());
        }
        GroupingDefinition grouping = stream.getGrouping();
        // if the streamId is defined, use it for the grouping,
        // otherwise assume default stream
        String streamId = grouping.getStreamId() == null ? Utils.DEFAULT_STREAM_ID : grouping.getStreamId();
        switch(grouping.getType()) {
            case SHUFFLE:
                declarer.shuffleGrouping(stream.getFrom(), streamId);
                break;
            case FIELDS:
                List<String> groupingArgs = grouping.getArgs();
                if (groupingArgs == null) {
                    throw new IllegalArgumentException("You must supply arguments for Fields grouping");
                }
                declarer.fieldsGrouping(stream.getFrom(), streamId, new Fields(groupingArgs));
                break;
            case ALL:
                declarer.allGrouping(stream.getFrom(), streamId);
                break;
            case GLOBAL:
                declarer.globalGrouping(stream.getFrom(), streamId);
                break;
            case NONE:
                declarer.noneGrouping(stream.getFrom(), streamId);
                break;
            case CUSTOM:
                declarer.customGrouping(stream.getFrom(), streamId, buildCustomStreamGrouping(stream.getGrouping().getCustomClass(), executionContext, objectBuilder));
                break;
            default:
                throw new UnsupportedOperationException("unsupported grouping type: " + grouping);
        }
    }
    executionContext.setStreams(componentStreams);
}
Also used: IRichBolt(org.apache.storm.topology.IRichBolt), ComponentStream(org.apache.heron.eco.definition.ComponentStream), StreamDefinition(org.apache.heron.eco.definition.StreamDefinition), HashMap(java.util.HashMap), EcoTopologyDefinition(org.apache.heron.eco.definition.EcoTopologyDefinition), GroupingDefinition(org.apache.heron.eco.definition.GroupingDefinition), IBasicBolt(org.apache.storm.topology.IBasicBolt), Fields(org.apache.storm.tuple.Fields), BoltDeclarer(org.apache.storm.topology.BoltDeclarer), IWindowedBolt(org.apache.storm.topology.IWindowedBolt)
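
The ECO builder above resolves the grouping type at runtime, but every branch boils down to one call against the BoltDeclarer returned by TopologyBuilder.setBolt. The following is a minimal sketch of that pattern in plain Storm, not Heron code: the component ids "source" and "sink", the "key" field, and the assumption that the caller supplies ready-made spout and bolt instances are all illustrative.

import org.apache.storm.generated.StormTopology;
import org.apache.storm.topology.BoltDeclarer;
import org.apache.storm.topology.IRichBolt;
import org.apache.storm.topology.IRichSpout;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;
import org.apache.storm.utils.Utils;

public class GroupingSketch {

    // Wires a single spout -> bolt edge and picks the grouping by name,
    // mirroring the switch in StreamBuilder.buildStreams above.
    // The spout and bolt instances are assumed to be supplied by the caller.
    public static StormTopology wire(IRichSpout spout, IRichBolt bolt, String groupingType) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("source", spout, 1);

        // setBolt returns a BoltDeclarer; every grouping call is made against it
        BoltDeclarer declarer = builder.setBolt("sink", bolt, 2);
        switch (groupingType) {
            case "SHUFFLE":
                declarer.shuffleGrouping("source", Utils.DEFAULT_STREAM_ID);
                break;
            case "FIELDS":
                // assumes the spout declares a field named "key" on its default stream
                declarer.fieldsGrouping("source", Utils.DEFAULT_STREAM_ID, new Fields("key"));
                break;
            case "ALL":
                declarer.allGrouping("source", Utils.DEFAULT_STREAM_ID);
                break;
            default:
                declarer.globalGrouping("source", Utils.DEFAULT_STREAM_ID);
                break;
        }
        return builder.createTopology();
    }
}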

Example 17 with BoltDeclarer

Use of org.apache.storm.topology.BoltDeclarer in project jstorm by alibaba.

From the class PerformanceTestTopology, method SetRemoteTopology:

public static void SetRemoteTopology() throws Exception {
    String streamName = (String) conf.get(Config.TOPOLOGY_NAME);
    if (streamName == null) {
        String[] className = Thread.currentThread().getStackTrace()[1].getClassName().split("\\.");
        streamName = className[className.length - 1];
    }
    TopologyBuilder builder = new TopologyBuilder();
    int spout_Parallelism_hint = Utils.getInt(conf.get(TOPOLOGY_SPOUT_PARALLELISM_HINT), 1);
    int bolt_Parallelism_hint = Utils.getInt(conf.get(TOPOLOGY_BOLT_PARALLELISM_HINT), 2);
    builder.setSpout("spout", new TestSpout(), spout_Parallelism_hint);
    BoltDeclarer boltDeclarer = builder.setBolt("bolt", new TestBolt(), bolt_Parallelism_hint);
    // localFirstGrouping is only for jstorm
    // boltDeclarer.localFirstGrouping(SequenceTopologyDef.SEQUENCE_SPOUT_NAME);
    boltDeclarer.shuffleGrouping("spout");
    // .addConfiguration(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 60);
    StormSubmitter.submitTopology(streamName, conf, builder.createTopology());
}
Also used: TopologyBuilder(org.apache.storm.topology.TopologyBuilder), BoltDeclarer(org.apache.storm.topology.BoltDeclarer)
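
The commented-out addConfiguration call above hints at per-component configuration, which also goes through the BoltDeclarer. A minimal sketch of that idea, assuming the spout and bolt instances come from the caller and using a 60-second tick-tuple frequency purely as an example value:

import org.apache.storm.Config;
import org.apache.storm.topology.BoltDeclarer;
import org.apache.storm.topology.IRichBolt;
import org.apache.storm.topology.IRichSpout;
import org.apache.storm.topology.TopologyBuilder;

public class PerBoltConfigSketch {

    // Wires spout -> bolt and overrides the tick-tuple frequency for this bolt only;
    // spout and bolt instances are assumed to come from the caller.
    public static TopologyBuilder wire(IRichSpout spout, IRichBolt bolt) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("spout", spout, 1);

        BoltDeclarer declarer = builder.setBolt("bolt", bolt, 2);
        declarer.shuffleGrouping("spout")
                // per-component config: this bolt receives a tick tuple every 60 seconds
                .addConfiguration(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 60);
        return builder;
    }
}

Configuration added this way applies only to the declared bolt and overrides the topology-level setting for that component.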

Example 18 with BoltDeclarer

Use of org.apache.storm.topology.BoltDeclarer in project metron by apache.

From the class ParserTopologyBuilder, method build:

/**
 * Builds a Storm topology that parses telemetry data received from an external sensor.
 *
 * @param zookeeperUrl             Zookeeper URL
 * @param brokerUrl                Kafka Broker URL
 * @param sensorTypes              Types of sensors to parse
 * @param spoutParallelismSupplier         Supplier for the parallelism hint for the spout
 * @param spoutNumTasksSupplier            Supplier for the number of tasks for the spout
 * @param parserParallelismSupplier        Supplier for the parallelism hint for the parser bolt
 * @param parserNumTasksSupplier           Supplier for the number of tasks for the parser bolt
 * @param errorWriterParallelismSupplier   Supplier for the parallelism hint for the bolt that handles errors
 * @param errorWriterNumTasksSupplier      Supplier for the number of tasks for the bolt that handles errors
 * @param kafkaSpoutConfigSupplier         Supplier for the configuration options for the kafka spout
 * @param securityProtocolSupplier         Supplier for the security protocol
 * @param outputTopicSupplier              Supplier for the output kafka topic
 * @param errorTopicSupplier               Supplier for the error kafka topic
 * @param stormConfigSupplier              Supplier for the storm config
 * @return A Storm topology that parses telemetry data received from an external sensor
 * @throws Exception
 */
public static ParserTopology build(String zookeeperUrl, Optional<String> brokerUrl, List<String> sensorTypes, ValueSupplier<List> spoutParallelismSupplier, ValueSupplier<List> spoutNumTasksSupplier, ValueSupplier<Integer> parserParallelismSupplier, ValueSupplier<Integer> parserNumTasksSupplier, ValueSupplier<Integer> errorWriterParallelismSupplier, ValueSupplier<Integer> errorWriterNumTasksSupplier, ValueSupplier<List> kafkaSpoutConfigSupplier, ValueSupplier<String> securityProtocolSupplier, ValueSupplier<String> outputTopicSupplier, ValueSupplier<String> errorTopicSupplier, ValueSupplier<Config> stormConfigSupplier) throws Exception {
    // fetch configuration from zookeeper
    ParserConfigurations configs = new ParserConfigurations();
    Map<String, SensorParserConfig> sensorToParserConfigs = getSensorParserConfig(zookeeperUrl, sensorTypes, configs);
    Collection<SensorParserConfig> parserConfigs = sensorToParserConfigs.values();
    @SuppressWarnings("unchecked") List<Integer> spoutParallelism = (List<Integer>) spoutParallelismSupplier.get(parserConfigs, List.class);
    @SuppressWarnings("unchecked") List<Integer> spoutNumTasks = (List<Integer>) spoutNumTasksSupplier.get(parserConfigs, List.class);
    int parserParallelism = parserParallelismSupplier.get(parserConfigs, Integer.class);
    int parserNumTasks = parserNumTasksSupplier.get(parserConfigs, Integer.class);
    int errorWriterParallelism = errorWriterParallelismSupplier.get(parserConfigs, Integer.class);
    int errorWriterNumTasks = errorWriterNumTasksSupplier.get(parserConfigs, Integer.class);
    String outputTopic = outputTopicSupplier.get(parserConfigs, String.class);
    List<Map<String, Object>> kafkaSpoutConfig = kafkaSpoutConfigSupplier.get(parserConfigs, List.class);
    Optional<String> securityProtocol = Optional.ofNullable(securityProtocolSupplier.get(parserConfigs, String.class));
    // create the spout
    TopologyBuilder builder = new TopologyBuilder();
    int i = 0;
    List<String> spoutIds = new ArrayList<>();
    for (Entry<String, SensorParserConfig> entry : sensorToParserConfigs.entrySet()) {
        KafkaSpout kafkaSpout = createKafkaSpout(zookeeperUrl, entry.getKey(), securityProtocol, Optional.ofNullable(kafkaSpoutConfig.get(i)), entry.getValue());
        String spoutId = sensorToParserConfigs.size() > 1 ? "kafkaSpout-" + entry.getKey() : "kafkaSpout";
        builder.setSpout(spoutId, kafkaSpout, spoutParallelism.get(i)).setNumTasks(spoutNumTasks.get(i));
        spoutIds.add(spoutId);
        ++i;
    }
    // create the parser bolt
    ParserBolt parserBolt = createParserBolt(zookeeperUrl, brokerUrl, sensorToParserConfigs, securityProtocol, configs, Optional.ofNullable(outputTopic));
    BoltDeclarer boltDeclarer = builder.setBolt("parserBolt", parserBolt, parserParallelism).setNumTasks(parserNumTasks);
    for (String spoutId : spoutIds) {
        boltDeclarer.localOrShuffleGrouping(spoutId);
    }
    // create the error bolt, if needed
    if (errorWriterNumTasks > 0) {
        String errorTopic = errorTopicSupplier.get(parserConfigs, String.class);
        WriterBolt errorBolt = createErrorBolt(zookeeperUrl, brokerUrl, sensorTypes.get(0), securityProtocol, configs, parserConfigs.iterator().next(), errorTopic);
        builder.setBolt("errorMessageWriter", errorBolt, errorWriterParallelism).setNumTasks(errorWriterNumTasks).localOrShuffleGrouping("parserBolt", Constants.ERROR_STREAM);
    }
    return new ParserTopology(builder, stormConfigSupplier.get(parserConfigs, Config.class));
}
Also used: TopologyBuilder(org.apache.storm.topology.TopologyBuilder), SensorParserConfig(org.apache.metron.common.configuration.SensorParserConfig), KafkaSpoutConfig(org.apache.storm.kafka.spout.KafkaSpoutConfig), ConsumerConfig(org.apache.kafka.clients.consumer.ConsumerConfig), Config(org.apache.storm.Config), ParserBolt(org.apache.metron.parsers.bolt.ParserBolt), ArrayList(java.util.ArrayList), WriterBolt(org.apache.metron.parsers.bolt.WriterBolt), BoltDeclarer(org.apache.storm.topology.BoltDeclarer), ParserConfigurations(org.apache.metron.common.configuration.ParserConfigurations), List(java.util.List), KafkaSpout(org.apache.storm.kafka.spout.KafkaSpout), StormKafkaSpout(org.apache.metron.storm.kafka.flux.StormKafkaSpout), HashMap(java.util.HashMap), Map(java.util.Map)
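
The loop over spoutIds shows the fan-in pattern: a single BoltDeclarer collects one localOrShuffleGrouping subscription per spout. A stripped-down sketch of the same wiring, with illustrative ids ("spout-0", "spout-1", ..., "parser") and spout/bolt instances assumed to be supplied by the caller:

import java.util.List;

import org.apache.storm.topology.BoltDeclarer;
import org.apache.storm.topology.IRichBolt;
import org.apache.storm.topology.IRichSpout;
import org.apache.storm.topology.TopologyBuilder;

public class FanInSketch {

    // Attaches every spout in the list to a single bolt with localOrShuffleGrouping,
    // mirroring the spoutIds loop in build() above. Spout and bolt instances are
    // assumed to be supplied by the caller; parallelism and task counts are examples.
    public static TopologyBuilder fanIn(List<IRichSpout> spouts, IRichBolt bolt) {
        TopologyBuilder builder = new TopologyBuilder();

        // one spout per input source; ids are reused for the grouping step below
        for (int i = 0; i < spouts.size(); i++) {
            builder.setSpout("spout-" + i, spouts.get(i), 1);
        }

        // a single BoltDeclarer accumulates one subscription per spout
        BoltDeclarer declarer = builder.setBolt("parser", bolt, 2).setNumTasks(4);
        for (int i = 0; i < spouts.size(); i++) {
            declarer.localOrShuffleGrouping("spout-" + i);
        }
        return builder;
    }
}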

Example 19 with BoltDeclarer

Use of org.apache.storm.topology.BoltDeclarer in project open-kilda by telstra.

From the class FloodlightRouterTopology, method kafkaOutput:

private TopologyOutput kafkaOutput(TopologyBuilder topology) {
    RegionAwareKafkaTopicSelector topicSelector = new RegionAwareKafkaTopicSelector();
    BoltDeclarer generic = declareBolt(topology, makeKafkaBolt(MessageSerializer.class).withTopicSelector(topicSelector), ComponentType.KAFKA_GENERIC_OUTPUT);
    BoltDeclarer hs = declareBolt(topology, makeKafkaBolt(AbstractMessageSerializer.class).withTopicSelector(topicSelector), ComponentType.KAFKA_HS_OUTPUT);
    return new TopologyOutput(generic, hs);
}
Also used: BoltDeclarer(org.apache.storm.topology.BoltDeclarer)
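
declareBolt and makeKafkaBolt are open-kilda helpers rather than Storm APIs. With the stock storm-kafka-client KafkaBolt, roughly equivalent wiring could look like the sketch below; the topic name, producer properties, and component id are illustrative assumptions, not values from the open-kilda code.

import java.util.Properties;

import org.apache.storm.kafka.bolt.KafkaBolt;
import org.apache.storm.kafka.bolt.mapper.FieldNameBasedTupleToKafkaMapper;
import org.apache.storm.kafka.bolt.selector.DefaultTopicSelector;
import org.apache.storm.topology.BoltDeclarer;
import org.apache.storm.topology.TopologyBuilder;

public class KafkaOutputSketch {

    // Registers a KafkaBolt with a topic selector and returns its BoltDeclarer,
    // so callers can attach groupings afterwards, as FloodlightRouterTopology does.
    public static BoltDeclarer kafkaOutput(TopologyBuilder topology, String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("acks", "1");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // expects tuples with "key" and "message" fields (default mapper);
        // the topic name "speaker.out" is purely illustrative
        KafkaBolt<String, String> bolt = new KafkaBolt<String, String>()
                .withProducerProperties(props)
                .withTopicSelector(new DefaultTopicSelector("speaker.out"))
                .withTupleToKafkaMapper(new FieldNameBasedTupleToKafkaMapper<>());

        return topology.setBolt("kafka-output", bolt, 1);
    }
}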

Example 20 with BoltDeclarer

Use of org.apache.storm.topology.BoltDeclarer in project open-kilda by telstra.

From the class FloodlightRouterTopology, method controllerToSpeaker:

private void controllerToSpeaker(TopologyBuilder topology, TopologyOutput output) {
    BoltDeclarer kafkaProducer = output.getKafkaGenericOutput();
    declareKafkaSpout(topology, kafkaTopics.getSpeakerTopic(), ComponentType.SPEAKER_KAFKA_SPOUT);
    ControllerToSpeakerProxyBolt proxy = new ControllerToSpeakerSharedProxyBolt(kafkaTopics.getSpeakerRegionTopic(), regions, kafkaTopics, Duration.ofSeconds(topologyConfig.getSwitchMappingRemoveDelay()));
    declareBolt(topology, proxy, ComponentType.SPEAKER_REQUEST_BOLT)
            .shuffleGrouping(ComponentType.SPEAKER_KAFKA_SPOUT)
            .allGrouping(SwitchMonitorBolt.BOLT_ID, SwitchMonitorBolt.STREAM_REGION_MAPPING_ID)
            .allGrouping(ZooKeeperSpout.SPOUT_ID);
    kafkaProducer
            .shuffleGrouping(ComponentType.SPEAKER_REQUEST_BOLT)
            .shuffleGrouping(ComponentType.SPEAKER_REQUEST_BOLT, Stream.KILDA_SWITCH_MANAGER)
            .shuffleGrouping(ComponentType.SPEAKER_REQUEST_BOLT, Stream.NORTHBOUND_REPLY);
}
Also used: ControllerToSpeakerSharedProxyBolt(org.openkilda.wfm.topology.floodlightrouter.bolts.ControllerToSpeakerSharedProxyBolt), BoltDeclarer(org.apache.storm.topology.BoltDeclarer), ControllerToSpeakerProxyBolt(org.openkilda.wfm.topology.floodlightrouter.bolts.ControllerToSpeakerProxyBolt)
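
kafkaProducer here is the BoltDeclarer created in kafkaOutput above. Because each grouping call returns the declarer itself, subscriptions to several streams of the same component can be chained. A small sketch of that chaining, with illustrative ids standing in for the real ComponentType and Stream constants:

import org.apache.storm.topology.BoltDeclarer;

public class MultiStreamWiringSketch {

    // Illustrative component and stream ids; the real values come from
    // ComponentType and Stream in the open-kilda code above.
    private static final String REQUEST_BOLT = "speaker.request";
    private static final String STREAM_SWITCH_MANAGER = "kilda.switch.manager";
    private static final String STREAM_NORTHBOUND = "northbound.reply";

    // Subscribes an already-declared kafka producer bolt to the default stream
    // plus two named streams of the request bolt. Every grouping call returns
    // the same BoltDeclarer, so the subscriptions chain.
    public static void wire(BoltDeclarer kafkaProducer) {
        kafkaProducer
                .shuffleGrouping(REQUEST_BOLT)
                .shuffleGrouping(REQUEST_BOLT, STREAM_SWITCH_MANAGER)
                .shuffleGrouping(REQUEST_BOLT, STREAM_NORTHBOUND);
    }
}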

Aggregations

BoltDeclarer (org.apache.storm.topology.BoltDeclarer): 34
TopologyBuilder (org.apache.storm.topology.TopologyBuilder): 20
HashMap (java.util.HashMap): 13
SpoutDeclarer (org.apache.storm.topology.SpoutDeclarer): 10
ArrayList (java.util.ArrayList): 7
IRichBolt (org.apache.storm.topology.IRichBolt): 6
Map (java.util.Map): 5
IBasicBolt (org.apache.storm.topology.IBasicBolt): 5
Config (org.apache.storm.Config): 4
SharedMemory (org.apache.storm.generated.SharedMemory): 4
KafkaSpout (org.apache.storm.kafka.spout.KafkaSpout): 4
Fields (org.apache.storm.tuple.Fields): 4
List (java.util.List): 3
SourceArgs (org.apache.storm.coordination.CoordinatedBolt.SourceArgs): 3
StormTopology (org.apache.storm.generated.StormTopology): 3
KafkaBolt (org.apache.storm.kafka.bolt.KafkaBolt): 3
CtrlBoltRef (org.openkilda.wfm.CtrlBoltRef): 3
HashSet (java.util.HashSet): 2
LinkedHashMap (java.util.LinkedHashMap): 2
Set (java.util.Set): 2