
Example 16 with Message

use of com.ibm.streamsx.topology.tuple.Message in project streamsx.topology by IBMStreams.

the class MqttStreamsTest method testReusableApp.

@Test
public void testReusableApp() throws Exception {
    checkAssumes();
    setupDebug();
    Topology top = new Topology("testReusableApp");
    MsgGenerator mgen = new MsgGenerator(top.getName());
    String subClientId = newSubClientId(top.getName());
    String pubClientId = newPubClientId(top.getName());
    String topic = getMqttTopics()[0];
    List<Message> msgs = createMsgs(mgen, null);
    // Test an app structured more as a "reusable asset" - i.e.,
    // where the mqtt connection info (URI, authInfo) and
    // topic are defined at submission time.
    // define/create the app's submission parameters
    ParameterHelper params = new ParameterHelper(top);
    params.definitions().put("mqtt.serverURI", String.class);
    params.definitions().put("mqtt.userID", System.getProperty("user.name"));
    params.definitions().put("mqtt.password", String.class);
    params.definitions().put("mqtt.pub.topic", String.class);
    params.definitions().put("mqtt.sub.topic", String.class);
    params.createAll();
    // add the actual param values for our call to submit()
    Map<String, Object> submitParams = new HashMap<>();
    submitParams.put("mqtt.serverURI", "tcp://localhost:1883");
    // submitParams.put("mqtt.userID", System.getProperty("user.name"));
    submitParams.put("mqtt.password", "myMosquittoPw");
    submitParams.put("mqtt.pub.topic", topic);
    submitParams.put("mqtt.sub.topic", topic);
    getConfig().put(ContextProperties.SUBMISSION_PARAMS, submitParams);
    // Produce and consume the msgs
    Map<String, Object> pconfig = createConfig(pubClientId);
    addMqttParams(pconfig, false, params);
    Map<String, Object> cconfig = createConfig(subClientId);
    addMqttParams(cconfig, true, params);
    MqttStreams producer = new MqttStreams(top, pconfig);
    MqttStreams consumer = new MqttStreams(top, cconfig);
    TStream<Message> msgsToPublish = top.constants(msgs).modify(new InitialDelay<Message>(PUB_DELAY_MSEC));
    TSink sink = producer.publish(msgsToPublish, params.getString("mqtt.pub.topic"));
    TStream<Message> rcvdMsgs = consumer.subscribe(params.getString("mqtt.sub.topic"));
    // for validation...
    rcvdMsgs.print();
    // just our msgs
    rcvdMsgs = selectMsgs(rcvdMsgs, mgen.pattern());
    TStream<String> rcvdAsString = rcvdMsgs.transform(msgToJSONStringFunc());
    msgs = modifyList(msgs, setTopic(topic));
    List<String> expectedAsString = mapList(msgs, msgToJSONStringFunc());
    if (testBuildOnly(top))
        return;
    completeAndValidate(subClientId, top, rcvdAsString, SEC_TIMEOUT, expectedAsString.toArray(new String[0]));
    assertTrue(sink != null);
}
Also used : TSink(com.ibm.streamsx.topology.TSink) MqttStreams(com.ibm.streamsx.topology.messaging.mqtt.MqttStreams) SimpleMessage(com.ibm.streamsx.topology.tuple.SimpleMessage) Message(com.ibm.streamsx.topology.tuple.Message) HashMap(java.util.HashMap) Topology(com.ibm.streamsx.topology.Topology) TestTopology(com.ibm.streamsx.topology.test.TestTopology) JSONObject(com.ibm.json.java.JSONObject) Test(org.junit.Test)
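
The test above leans on harness utilities (ParameterHelper, addMqttParams). Outside the harness, the same "reusable asset" pattern can be written against the public API alone, using Topology.createSubmissionParameter() and ContextProperties.SUBMISSION_PARAMS. The sketch below is a minimal, hypothetical standalone version: the "serverURI" config key, and the assumption that the MqttStreams config map accepts Supplier values (as addMqttParams suggests), are inferences from this test, and the broker URI and topic are placeholders.

import java.util.HashMap;
import java.util.Map;

import com.ibm.streamsx.topology.TStream;
import com.ibm.streamsx.topology.Topology;
import com.ibm.streamsx.topology.context.ContextProperties;
import com.ibm.streamsx.topology.context.StreamsContext;
import com.ibm.streamsx.topology.context.StreamsContextFactory;
import com.ibm.streamsx.topology.function.Supplier;
import com.ibm.streamsx.topology.messaging.mqtt.MqttStreams;
import com.ibm.streamsx.topology.tuple.Message;

public class ReusableMqttApp {
    public static void main(String[] args) throws Exception {
        Topology top = new Topology("reusableMqttApp");

        // Connection info and topic are declared as submission parameters, not hard-coded.
        Supplier<String> serverURI = top.createSubmissionParameter("mqtt.serverURI", String.class);
        Supplier<String> subTopic = top.createSubmissionParameter("mqtt.sub.topic", String.class);

        Map<String, Object> config = new HashMap<>();
        config.put("serverURI", serverURI); // assumed config key; value bound at submit time
        MqttStreams mqtt = new MqttStreams(top, config);

        TStream<Message> rcvdMsgs = mqtt.subscribe(subTopic);
        rcvdMsgs.print();

        // The concrete values are supplied only at submission time.
        Map<String, Object> submitParams = new HashMap<>();
        submitParams.put("mqtt.serverURI", "tcp://localhost:1883");
        submitParams.put("mqtt.sub.topic", "example/topic");

        Map<String, Object> contextConfig = new HashMap<>();
        contextConfig.put(ContextProperties.SUBMISSION_PARAMS, submitParams);

        StreamsContextFactory.getStreamsContext(StreamsContext.Type.DISTRIBUTED)
                .submit(top, contextConfig).get();
    }
}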

Example 17 with Message

use of com.ibm.streamsx.topology.tuple.Message in project streamsx.topology by IBMStreams.

the class KafkaConsumer method subscribe.

/**
     * Subscribe to a topic and create a stream of messages.
     * <p>
     * N.B., A topology that includes this will not support
     * {@code StreamsContext.Type.EMBEDDED}.
     * <p>
     * N.B. due to com.ibm.streamsx.messaging 
     * <a href="https://github.com/IBMStreams/streamsx.messaging/issues/118">issue#118</a>,
     * multiple consumers will have issues in
     * {@code StreamsContext.Type.STANDALONE}.
     * <p>
     * N.B. due to com.ibm.streamsx.messaging 
     * <a href="https://github.com/IBMStreams/streamsx.messaging/issues/117">issue#117</a>,
     * a consumer in {@code StreamsContext.Type.STANDALONE} subsequently results
     * in an orphaned {@code standalone} process that continues as the lead
     * group/topic consumer, thereby preventing subsequent instances of the
     * group/topic consumer from receiving messages.
     * <p>
     * N.B. due to com.ibm.streamsx.messaging 
     * <a href="https://github.com/IBMStreams/streamsx.messaging/issues/114">issue#114</a>,
     * a consumer essentially ignores messages generated by producers where the
     * optional {@code key} is {@code null}.
     * e.g., Kafka's {@code kafka-console-producer.sh} tool generates
     * {@code key==null} messages.
     *
     * @param threadsPerTopic number of threads to allocate to processing each
     *        topic.  May be a submission parameter.
     * @param topic the topic to subscribe to.  May be a submission parameter.
     * @return TStream&lt;Message&gt;
     *      The generated {@code Message} tuples have a non-null {@code topic}.
     *      The tuple's {@code key} will be null if the Kafka message
     *      lacked a key or its key was the empty string.
     * @throws IllegalArgumentException if topic is null.
     * @throws IllegalArgumentException if threadsPerTopic is null.
     * @see Value
     * @see Topology#createSubmissionParameter(String, Class)
     */
public TStream<Message> subscribe(Supplier<Integer> threadsPerTopic, Supplier<String> topic) {
    if (topic == null)
        throw new IllegalArgumentException("topic");
    if (threadsPerTopic == null || (threadsPerTopic.get() != null && threadsPerTopic.get() <= 0))
        throw new IllegalArgumentException("threadsPerTopic");
    Map<String, Object> params = new HashMap<>();
    params.put("topic", topic);
    // The default is one thread per topic.
    if (!(threadsPerTopic instanceof Value && threadsPerTopic.get() == 1))
        params.put("threadsPerTopic", threadsPerTopic);
    if (!config.isEmpty())
        params.put("kafkaProperty", Util.toKafkaProperty(config));
    // workaround streamsx.messaging issue #107
    params.put("propertiesFile", PROP_FILE_PARAM);
    addPropertiesFile();
    // Use SPL.invoke to avoid adding a compile time dependency
    // to com.ibm.streamsx.messaging since JavaPrimitive.invoke*()
    // lack "kind" based variants.
    String kind = "com.ibm.streamsx.messaging.kafka::KafkaConsumer";
    String className = "com.ibm.streamsx.messaging.kafka.KafkaSource";
    SPLStream rawKafka = SPL.invokeSource(te, kind, params, KafkaSchemas.KAFKA);
    SPL.tagOpAsJavaPrimitive(toOp(rawKafka), kind, className);
    TStream<Message> rcvdMsgs = toMessageStream(rawKafka);
    rcvdMsgs.colocate(rawKafka);
    // workaround streamsx.messaging issue#118 w/java8
    // isolate even in the single consumer case since we don't
    // know if others may be subsequently created.
    rcvdMsgs = rcvdMsgs.isolate();
    return rcvdMsgs;
}
Also used : SimpleMessage(com.ibm.streamsx.topology.tuple.SimpleMessage) Message(com.ibm.streamsx.topology.tuple.Message) HashMap(java.util.HashMap) Value(com.ibm.streamsx.topology.logic.Value) SPLStream(com.ibm.streamsx.topology.spl.SPLStream)
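
As a usage reference, the sketch below is a minimal, hypothetical call site for this subscribe() overload: one consumer thread, with the topic left open as a submission parameter. The consumer property names in the config map are assumptions about the 0.8-era streamsx.messaging Kafka operators, not something this class dictates.

import java.util.HashMap;
import java.util.Map;

import com.ibm.streamsx.topology.TStream;
import com.ibm.streamsx.topology.Topology;
import com.ibm.streamsx.topology.logic.Value;
import com.ibm.streamsx.topology.messaging.kafka.KafkaConsumer;
import com.ibm.streamsx.topology.tuple.Message;

public class KafkaSubscribeSketch {
    public static void main(String[] args) throws Exception {
        Topology top = new Topology("kafkaSubscribeSketch");

        Map<String, Object> consumerConfig = new HashMap<>();
        consumerConfig.put("zookeeper.connect", "localhost:2181"); // assumed property name
        consumerConfig.put("group.id", "exampleGroup");            // assumed property name
        KafkaConsumer consumer = new KafkaConsumer(top, consumerConfig);

        // One thread per topic; the topic itself is bound at submission time.
        TStream<Message> rcvdMsgs = consumer.subscribe(
                new Value<Integer>(1),
                top.createSubmissionParameter("kafka.topic", String.class));
        rcvdMsgs.print();
    }
}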

Example 18 with Message

use of com.ibm.streamsx.topology.tuple.Message in project streamsx.topology by IBMStreams.

the class MqttStreams method publish.

/**
     * Publish {@code stream} tuples to one or more MQTT topics.
     * <p>
     * If {@code topic} is null, each tuple is published to the topic
     * specified by its {@link Message#getTopic()}.
     * Otherwise, all tuples are published to {@code topic}.
     * <p>
     * The messages added to MQTT include a topic and message.
     * The {@link Message#getKey()} field is ignored.
     * <p>
     * The message is handled with the quality of service
     * indicated by configuration property {@code defaultQOS}.
     * 
     * @param stream the stream to publish
     * @param topic topic to publish to.  May be a submission parameter. May be null.
     * @return the sink element
     * @see Value
     * @see Topology#createSubmissionParameter(String, Class)
     */
public TSink publish(TStream<? extends Message> stream, Supplier<String> topic) {
    stream = stream.lowLatency();
    @SuppressWarnings("unchecked") SPLStream splStream = SPLStreams.convertStream((TStream<Message>) stream, cvtMsgFunc(topic), MqttSchemas.MQTT);
    Map<String, Object> params = new HashMap<String, Object>();
    params.put("reconnectionBound", -1);
    params.put("qos", 0);
    params.putAll(Util.configToSplParams(config));
    params.remove("messageQueueSize");
    if (topic == null)
        params.put("topicAttributeName", "topic");
    else
        params.put("topic", topic);
    params.put("dataAttributeName", "message");
    if (++opCnt > 1) {
        // each op requires its own clientID
        String clientId = (String) params.get("clientID");
        if (clientId != null && clientId.length() > 0)
            params.put("clientID", opCnt + "-" + clientId);
    }
    // Use SPL.invoke to avoid adding a compile time dependency
    // to com.ibm.streamsx.messaging since JavaPrimitive.invoke*()
    // lack "kind" based variants.
    String kind = "com.ibm.streamsx.messaging.mqtt::MQTTSink";
    String className = "com.ibm.streamsx.messaging.mqtt.MqttSinkOperator";
    TSink sink = SPL.invokeSink(kind, splStream, params);
    SPL.tagOpAsJavaPrimitive(sink.operator(), kind, className);
    return sink;
}
Also used : TSink(com.ibm.streamsx.topology.TSink) SimpleMessage(com.ibm.streamsx.topology.tuple.SimpleMessage) Message(com.ibm.streamsx.topology.tuple.Message) HashMap(java.util.HashMap) SPLStream(com.ibm.streamsx.topology.spl.SPLStream)
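
A minimal, hypothetical driver for this method is sketched below, showing the two topic modes the Javadoc describes: a fixed topic wrapped in a Value, or a null topic so each tuple is routed by its own Message.getTopic(). The "serverURI" and "clientID" config keys and the broker URI are assumptions.

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import com.ibm.streamsx.topology.TSink;
import com.ibm.streamsx.topology.TStream;
import com.ibm.streamsx.topology.Topology;
import com.ibm.streamsx.topology.logic.Value;
import com.ibm.streamsx.topology.messaging.mqtt.MqttStreams;
import com.ibm.streamsx.topology.tuple.Message;
import com.ibm.streamsx.topology.tuple.SimpleMessage;

public class MqttPublishSketch {
    public static void main(String[] args) throws Exception {
        Topology top = new Topology("mqttPublishSketch");

        Map<String, Object> config = new HashMap<>();
        config.put("serverURI", "tcp://localhost:1883"); // assumed config key
        config.put("clientID", "examplePublisher");      // assumed config key
        MqttStreams mqtt = new MqttStreams(top, config);

        TStream<Message> msgs = top.constants(
                Arrays.<Message>asList(new SimpleMessage("21.5", "sensor42")));

        // Fixed topic: every tuple is published to "sensors/temperature".
        TSink sink = mqtt.publish(msgs, new Value<String>("sensors/temperature"));

        // Per-tuple topic: publish(msgs, null) would instead route each tuple
        // to the topic returned by its Message.getTopic().
    }
}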

Example 19 with Message

use of com.ibm.streamsx.topology.tuple.Message in project streamsx.topology by IBMStreams.

the class KafkaStreamsTest method testSubtypeExplicitTopicProducer.

@Test
public void testSubtypeExplicitTopicProducer() throws Exception {
    checkAssumes();
    Topology top = new Topology("testSubtypeExplicitTopicProducer");
    MsgGenerator mgen = new MsgGenerator(top.getName());
    String groupId = newGroupId(top.getName());
    String topicVal = getKafkaTopics()[0];
    Supplier<String> topic = new Value<String>(topicVal);
    KafkaProducer producer = new KafkaProducer(top, createProducerConfig());
    KafkaConsumer consumer = new KafkaConsumer(top, createConsumerConfig(groupId));
    // Test producer that takes a TStream<MyMsgSubtype>
    List<MyMsgSubtype> msgs = new ArrayList<>();
    msgs.add(new MyMsgSubtype(mgen.create(topicVal, "Hello")));
    msgs.add(new MyMsgSubtype(mgen.create(topicVal, "key1", "Are you there?"), "key1"));
    TStream<MyMsgSubtype> msgsToPublish = top.constants(msgs).asType(MyMsgSubtype.class);
    msgsToPublish = msgsToPublish.modify(new InitialDelay<MyMsgSubtype>(PUB_DELAY_MSEC));
    producer.publish(msgsToPublish, topic);
    TStream<Message> rcvdMsgs = consumer.subscribe(topic);
    // for validation...
    rcvdMsgs.print();
    // just our msgs
    rcvdMsgs = selectMsgs(rcvdMsgs, mgen.pattern());
    TStream<String> rcvdAsString = rcvdMsgs.transform(msgToJSONStringFunc());
    List<String> expectedAsString = mapList(msgs, subtypeMsgToJSONStringFunc(topicVal));
    setupDebug();
    if (testBuildOnly(top))
        return;
    completeAndValidate(groupId, top, rcvdAsString, SEC_TIMEOUT, expectedAsString.toArray(new String[0]));
}
Also used : KafkaProducer(com.ibm.streamsx.topology.messaging.kafka.KafkaProducer) InitialDelay(com.ibm.streamsx.topology.test.InitialDelay) SimpleMessage(com.ibm.streamsx.topology.tuple.SimpleMessage) Message(com.ibm.streamsx.topology.tuple.Message) ArrayList(java.util.ArrayList) KafkaConsumer(com.ibm.streamsx.topology.messaging.kafka.KafkaConsumer) Topology(com.ibm.streamsx.topology.Topology) TestTopology(com.ibm.streamsx.topology.test.TestTopology) Value(com.ibm.streamsx.topology.logic.Value) Test(org.junit.Test)
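
The point of this test is that publish() accepts any Message subtype. The test's MyMsgSubtype helper is not shown on this page, but a subtype only needs the accessors these examples rely on: getKey() and getTopic() are referenced in the Javadoc above, and getMessage() is assumed here. The class below is a hypothetical minimal implementation, not the harness's actual helper; a TStream of it can be passed to publish() just as the MyMsgSubtype stream is above.

import java.io.Serializable;

import com.ibm.streamsx.topology.tuple.Message;

// Hypothetical minimal Message implementation; tuple classes must be serializable.
public class MyAppMessage implements Message, Serializable {
    private static final long serialVersionUID = 1L;

    private final String message;
    private final String key;

    public MyAppMessage(String message, String key) {
        this.message = message;
        this.key = key;
    }

    @Override
    public String getMessage() { return message; }

    @Override
    public String getKey() { return key; }

    @Override
    public String getTopic() { return null; } // topic chosen at publish() time
}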

Example 20 with Message

use of com.ibm.streamsx.topology.tuple.Message in project streamsx.topology by IBMStreams.

the class KafkaStreamsTest method testMsgImplProducer.

@Test
public void testMsgImplProducer() throws Exception {
    checkAssumes();
    Topology top = new Topology("testMsgImplProducer");
    MsgGenerator mgen = new MsgGenerator(top.getName());
    String groupId = newGroupId(top.getName());
    String topicVal = getKafkaTopics()[0];
    Supplier<String> topic = new Value<String>(topicVal);
    KafkaProducer producer = new KafkaProducer(top, createProducerConfig());
    KafkaConsumer consumer = new KafkaConsumer(top, createConsumerConfig(groupId));
    // Test producer that takes TStream<SimpleMessage> and an explicit topic.
    List<Message> msgs = new ArrayList<>();
    msgs.add(new SimpleMessage(mgen.create(topicVal, "Hello")));
    msgs.add(new SimpleMessage(mgen.create(topicVal, "key1", "Are you there?"), "key1"));
    TStream<Message> msgsToPublish = top.constants(msgs);
    msgsToPublish = msgsToPublish.modify(new InitialDelay<Message>(PUB_DELAY_MSEC));
    producer.publish(msgsToPublish, topic);
    TStream<Message> rcvdMsgs = consumer.subscribe(topic);
    // for validation...
    rcvdMsgs.print();
    // just our msgs
    rcvdMsgs = selectMsgs(rcvdMsgs, mgen.pattern());
    TStream<String> rcvdAsString = rcvdMsgs.transform(msgToJSONStringFunc());
    msgs = modifyList(msgs, setTopic(topicVal));
    List<String> expectedAsString = mapList(msgs, msgToJSONStringFunc());
    setupDebug();
    if (testBuildOnly(top))
        return;
    completeAndValidate(groupId, top, rcvdAsString, SEC_TIMEOUT, expectedAsString.toArray(new String[0]));
}
Also used : KafkaProducer(com.ibm.streamsx.topology.messaging.kafka.KafkaProducer) InitialDelay(com.ibm.streamsx.topology.test.InitialDelay) SimpleMessage(com.ibm.streamsx.topology.tuple.SimpleMessage) Message(com.ibm.streamsx.topology.tuple.Message) ArrayList(java.util.ArrayList) KafkaConsumer(com.ibm.streamsx.topology.messaging.kafka.KafkaConsumer) Topology(com.ibm.streamsx.topology.Topology) TestTopology(com.ibm.streamsx.topology.test.TestTopology) Value(com.ibm.streamsx.topology.logic.Value) Test(org.junit.Test)
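
Stripped of the test harness (MsgGenerator, createProducerConfig, selectMsgs, completeAndValidate), the producer half of this test reduces to the sketch below. The "metadata.broker.list" property name is an assumption for the Kafka 0.8-era streamsx.messaging adapter, and the topic name is a placeholder.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.ibm.streamsx.topology.TStream;
import com.ibm.streamsx.topology.Topology;
import com.ibm.streamsx.topology.logic.Value;
import com.ibm.streamsx.topology.messaging.kafka.KafkaProducer;
import com.ibm.streamsx.topology.tuple.Message;
import com.ibm.streamsx.topology.tuple.SimpleMessage;

public class KafkaPublishSketch {
    public static void main(String[] args) throws Exception {
        Topology top = new Topology("kafkaPublishSketch");

        Map<String, Object> producerConfig = new HashMap<>();
        producerConfig.put("metadata.broker.list", "localhost:9092"); // assumed property name
        KafkaProducer producer = new KafkaProducer(top, producerConfig);

        // One keyless and one keyed message; keyless messages are the ones
        // affected by streamsx.messaging issue #114 on the consumer side.
        List<Message> msgs = new ArrayList<>();
        msgs.add(new SimpleMessage("Hello"));
        msgs.add(new SimpleMessage("Are you there?", "key1"));

        TStream<Message> msgsToPublish = top.constants(msgs);
        producer.publish(msgsToPublish, new Value<String>("myTopic"));
    }
}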

Aggregations

Message (com.ibm.streamsx.topology.tuple.Message): 26
SimpleMessage (com.ibm.streamsx.topology.tuple.SimpleMessage): 25
Topology (com.ibm.streamsx.topology.Topology): 20
Test (org.junit.Test): 19
TestTopology (com.ibm.streamsx.topology.test.TestTopology): 18
Value (com.ibm.streamsx.topology.logic.Value): 15
TSink (com.ibm.streamsx.topology.TSink): 13
MqttStreams (com.ibm.streamsx.topology.messaging.mqtt.MqttStreams): 12
ArrayList (java.util.ArrayList): 12
KafkaConsumer (com.ibm.streamsx.topology.messaging.kafka.KafkaConsumer): 8
KafkaProducer (com.ibm.streamsx.topology.messaging.kafka.KafkaProducer): 8
InitialDelay (com.ibm.streamsx.topology.test.InitialDelay): 7
HashMap (java.util.HashMap): 6
SPLStream (com.ibm.streamsx.topology.spl.SPLStream): 4
JSONObject (com.ibm.json.java.JSONObject): 3
File (java.io.File): 2