Example 1 with TStream

Use of com.ibm.streamsx.topology.TStream in project streamsx.topology by IBMStreams.

The class StreamImpl, method publish:

@Override
public void publish(String topic, boolean allowFilter) {
    checkTopicName(topic);
    Type tupleType = getTupleType();
    if (JSONObject.class.equals(tupleType)) {
        filtersNotAllowed(allowFilter);
        @SuppressWarnings("unchecked") TStream<JSONObject> json = (TStream<JSONObject>) this;
        JSONStreams.toSPL(json).publish(topic, allowFilter);
        return;
    }
    BOperatorInvocation op;
    if (Schemas.usesDirectSchema(tupleType) || ((TStream<T>) this) instanceof SPLStream) {
        // Filters are only allowed when the tuple type is String or the stream is an SPL stream.
        if (String.class != tupleType && !(((TStream<T>) this) instanceof SPLStream))
            filtersNotAllowed(allowFilter);
        // Publish as a stream consumable by SPL & Java/Scala
        Map<String, Object> publishParms = new HashMap<>();
        publishParms.put("topic", topic);
        publishParms.put("allowFilter", allowFilter);
        op = builder().addSPLOperator("Publish", "com.ibm.streamsx.topology.topic::Publish", publishParms);
    } else if (getTupleClass() != null) {
        filtersNotAllowed(allowFilter);
        // Publish as a stream consumable only by Java/Scala
        Map<String, Object> params = new HashMap<>();
        params.put("topic", topic);
        params.put("class", getTupleClass().getName());
        op = builder().addSPLOperator("Publish", "com.ibm.streamsx.topology.topic::PublishJava", params);
    } else {
        throw new IllegalStateException("A TStream with a tuple type that contains a generic or unknown type cannot be published");
    }
    SourceInfo.setSourceInfo(op, SPL.class);
    this.connectTo(op, false, null);
}
Also used : HashMap(java.util.HashMap) BOperatorInvocation(com.ibm.streamsx.topology.builder.BOperatorInvocation) SPLStream(com.ibm.streamsx.topology.spl.SPLStream) TStream(com.ibm.streamsx.topology.TStream) Type(java.lang.reflect.Type) JSONObject(com.ibm.json.java.JSONObject) Map(java.util.Map)
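
For orientation, a minimal sketch of how this publish path is typically driven from application code, assuming the standard Topology/TStream publish and subscribe API; the topic name and tuple values are illustrative, and submission to a StreamsContext is omitted.

import com.ibm.streamsx.topology.TStream;
import com.ibm.streamsx.topology.Topology;

public class PublishSubscribeSketch {
    public static void main(String[] args) throws Exception {
        Topology topology = new Topology("PublishSubscribeSketch");
        // A String stream is published in an SPL-consumable form
        // (see the String.class check in StreamImpl.publish above).
        TStream<String> source = topology.strings("a", "b", "c");
        source.publish("sample/strings");
        // Another application (or the same one) subscribes by topic and tuple type.
        TStream<String> subscribed = topology.subscribe("sample/strings", String.class);
        subscribed.print();
    }
}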

Example 2 with TStream

Use of com.ibm.streamsx.topology.TStream in project streamsx.health by IBMStreams.

The class HealthDataBeaconService, method build:

public void build() {
    TStream<Observation> ecgIStream = topo.periodicSource(new HealthcareDataGenerator(DEFAULT_PATIENT_ID, "src/main/resources/ecgI.csv", ReadingTypeCode.ECG_LEAD_I.getCode()), ECG_PERIOD, ECG_PERIOD_TIMEUNIT);
    TStream<Observation> respStream = topo.periodicSource(new HealthcareDataGenerator(DEFAULT_PATIENT_ID, "src/main/resources/resp.csv", ReadingTypeCode.RESP_RATE.getCode()), RESP_PERIOD, RESP_PERIOD_TIMEUNIT);
    TStream<Observation> abpDiasStream = topo.periodicSource(new ABPDiastolicDataGenerator(DEFAULT_PATIENT_ID), VITALS_PERIOD, VITALS_PERIOD_TIMEUNIT);
    TStream<Observation> abpSysStream = topo.periodicSource(new ABPSystolicDataGenerator(DEFAULT_PATIENT_ID), VITALS_PERIOD, VITALS_PERIOD_TIMEUNIT);
    TStream<Observation> hrStream = topo.periodicSource(new HeartRateDataGenerator(DEFAULT_PATIENT_ID), VITALS_PERIOD, VITALS_PERIOD_TIMEUNIT);
    TStream<Observation> spo2Stream = topo.periodicSource(new SpO2DataGenerator(DEFAULT_PATIENT_ID), VITALS_PERIOD, VITALS_PERIOD_TIMEUNIT);
    TStream<Observation> temperatureStream = topo.periodicSource(new TemperatureDataGenerator(DEFAULT_PATIENT_ID), VITALS_PERIOD, VITALS_PERIOD_TIMEUNIT);
    Set<TStream<Observation>> observations = new HashSet<TStream<Observation>>();
    observations.add(respStream);
    observations.add(abpDiasStream);
    observations.add(abpSysStream);
    observations.add(hrStream);
    observations.add(spo2Stream);
    observations.add(temperatureStream);
    TStream<Observation> allStreams = ecgIStream.union(observations);
    TStream<Observation> multiplePatientStream = allStreams.multiTransform(new Multiplier(numPatientsSupplier, patientPrefixSupplier));
    multiplePatientStream.print();
    PublishConnector.publishObservation(multiplePatientStream, getPublishedTopic());
}
Also used : TStream(com.ibm.streamsx.topology.TStream) ABPDiastolicDataGenerator(com.ibm.streamsx.health.simulate.beacon.generators.ABPDiastolicDataGenerator) TemperatureDataGenerator(com.ibm.streamsx.health.simulate.beacon.generators.TemperatureDataGenerator) SpO2DataGenerator(com.ibm.streamsx.health.simulate.beacon.generators.SpO2DataGenerator) HealthcareDataGenerator(com.ibm.streamsx.health.simulate.beacon.generators.HealthcareDataGenerator) Observation(com.ibm.streamsx.health.ingest.types.model.Observation) HeartRateDataGenerator(com.ibm.streamsx.health.simulate.beacon.generators.HeartRateDataGenerator) ABPSystolicDataGenerator(com.ibm.streamsx.health.simulate.beacon.generators.ABPSystolicDataGenerator) HashSet(java.util.HashSet)
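
The data generators above are specific to streamsx.health; the merging pattern itself only needs Topology.periodicSource and TStream.union. A minimal sketch with plain lambdas, assuming those two standard APIs (the periods and tuple values are illustrative):

import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import com.ibm.streamsx.topology.TStream;
import com.ibm.streamsx.topology.Topology;

public class UnionSketch {
    public static void main(String[] args) throws Exception {
        Topology topo = new Topology("UnionSketch");
        // Each periodic source emits one tuple per period.
        TStream<String> a = topo.periodicSource(() -> "a", 1, TimeUnit.SECONDS);
        TStream<String> b = topo.periodicSource(() -> "b", 5, TimeUnit.SECONDS);
        TStream<String> c = topo.periodicSource(() -> "c", 10, TimeUnit.SECONDS);
        // union(Set) merges several streams of the same tuple type into one stream.
        Set<TStream<String>> others = new HashSet<>();
        others.add(b);
        others.add(c);
        TStream<String> all = a.union(others);
        all.print();
    }
}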

Example 3 with TStream

Use of com.ibm.streamsx.topology.TStream in project streamsx.topology by IBMStreams.

The class FunctionalSubmissionParamsTest, method OthersTest:

@Test
@Ignore("Suddenly started failing on jenkins streamsx.topology - but only there (expected 100 got 0). Get the build working again.")
public void OthersTest() throws Exception {
    Topology topology = newTopology("OthersTest");
    // getConfig().put(ContextProperties.KEEP_ARTIFACTS, true);
    // FunctionFilter op is based on FunctionFunctor and the
    // latter is what knows about submission params.
    // Hence, given the implementation, this test should cover
    // all sub classes (modify,transform,split,sink,window,aggregate...).
    //
    // That really accounts for all functional
    // operators except FunctionSource and FunctionPeriodicSource and
    // we have tests for those.
    //
    // But we really should verify anyway...
    Supplier<Integer> someInt = topology.createSubmissionParameter("someInt", Integer.class);
    Supplier<Integer> someIntD = topology.createSubmissionParameter("someIntD", 2);
    List<Integer> data = Arrays.asList(new Integer[] { 1, 2, 3, 4, 5 });
    TStream<Integer> s = topology.constants(data);
    // The test's functional logic asserts it receives the expected SP value.
    // It's the main form of validation for the test.
    // TStream.modify
    TStream<Integer> modified = s.modify(unaryFn(someInt, 1));
    TStream<Integer> modifiedD = s.modify(unaryFn(someIntD, 2));
    // TStream.transform
    TStream<Integer> xformed = s.transform(functionFn(someInt, 1));
    TStream<Integer> xformedD = s.transform(functionFn(someIntD, 2));
    // TStream.multiTransform
    TStream<Integer> multiXformed = s.multiTransform(functionIterableFn(someInt, 1));
    TStream<Integer> multiXformedD = s.multiTransform(functionIterableFn(someIntD, 2));
    // TStream.join
    TStream<Integer> joined = s.join(s.last(1), biFunctionListFn(someInt, 1));
    TStream<Integer> joinedD = s.join(s.last(1), biFunctionListFn(someIntD, 2));
    TStream<Integer> lastJoined = s.joinLast(s, biFunctionFn(someInt, 1));
    TStream<Integer> lastJoinedD = s.joinLast(s, biFunctionFn(someIntD, 2));
    // TStream.sink
    s.sink(sinkerFn(someInt, 1));
    s.sink(sinkerFn(someIntD, 2));
    // TStream.split
    List<TStream<Integer>> split = s.split(2, toIntFn(someInt, 1));
    TStream<Integer> unionedSplit = split.get(0).union(split.get(1));
    List<TStream<Integer>> splitD = s.split(2, toIntFn(someIntD, 2));
    TStream<Integer> unionedSplitD = splitD.get(0).union(splitD.get(1));
    // TStream.window
    TWindow<Integer, ?> win = s.window(s.last(1).key(functionFn(someInt, 1)));
    TStream<Integer> winAgg = win.aggregate(functionListFn(someIntD, 2));
    TWindow<Integer, ?> winD = s.window(s.last(1).key(functionFn(someInt, 1)));
    TStream<Integer> winAggD = winD.aggregate(functionListFn(someIntD, 2));
    // TWindow.aggregate
    TStream<Integer> agg = s.last(1).aggregate(functionListFn(someInt, 1));
    TStream<Integer> aggD = s.last(1).aggregate(functionListFn(someIntD, 2));
    s.last(1).aggregate(functionListFn(someInt, 1), 1, TimeUnit.MILLISECONDS);
    s.last(1).aggregate(functionListFn(someIntD, 2), 1, TimeUnit.MILLISECONDS);
    // TWindow.key
    TStream<Integer> kagg = s.last(1).key(functionFn(someInt, 1)).aggregate(functionListFn(someIntD, 2));
    TStream<Integer> kaggD = s.last(1).key(functionFn(someInt, 1)).aggregate(functionListFn(someIntD, 2));
    Map<String, Object> params = new HashMap<>();
    params.put("someInt", 1);
    getConfig().put(ContextProperties.SUBMISSION_PARAMS, params);
    ////////////////////
    Set<TStream<Integer>> all = new HashSet<>(Arrays.asList(modified, modifiedD, xformed, xformedD, multiXformed, multiXformedD, joined, joinedD, lastJoined, lastJoinedD, unionedSplit, unionedSplitD, winAgg, winAggD, agg, aggD, kagg, kaggD));
    TStream<Integer> union = modified.union(all);
    // The tester sees 0 tuples even when they are really there, so pass the union through an allow-all filter.
    union = union.filter(new AllowAll<Integer>());
    Tester tester = topology.getTester();
    Condition<Long> expectedCount = tester.tupleCount(union, all.size() * 5);
    complete(tester, expectedCount, 15, TimeUnit.SECONDS);
    assertTrue(expectedCount.toString(), expectedCount.valid());
}
Also used : Tester(com.ibm.streamsx.topology.tester.Tester) HashMap(java.util.HashMap) Topology(com.ibm.streamsx.topology.Topology) TestTopology(com.ibm.streamsx.topology.test.TestTopology) TStream(com.ibm.streamsx.topology.TStream) AllowAll(com.ibm.streamsx.topology.test.AllowAll) HashSet(java.util.HashSet) Ignore(org.junit.Ignore) Test(org.junit.Test)
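
A compact sketch of the submission-parameter mechanism this test exercises, assuming the standard createSubmissionParameter and ContextProperties.SUBMISSION_PARAMS API; the parameter name "threshold" and the values are illustrative, and the final submit call is only indicated in a comment.

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import com.ibm.streamsx.topology.TStream;
import com.ibm.streamsx.topology.Topology;
import com.ibm.streamsx.topology.context.ContextProperties;
import com.ibm.streamsx.topology.function.Supplier;

public class SubmissionParamSketch {
    public static void main(String[] args) throws Exception {
        Topology topology = new Topology("SubmissionParamSketch");
        // Declare a submission-time parameter; its value is resolved when the job is submitted.
        Supplier<Integer> threshold = topology.createSubmissionParameter("threshold", Integer.class);
        TStream<Integer> s = topology.constants(Arrays.asList(1, 2, 3, 4, 5));
        // The supplier is read inside the functional logic, at runtime.
        TStream<Integer> filtered = s.filter(v -> v >= threshold.get());
        filtered.print();
        // The value is supplied through the submission configuration, e.g. the config
        // passed to StreamsContextFactory.getStreamsContext(...).submit(topology, config).
        Map<String, Object> params = new HashMap<>();
        params.put("threshold", 3);
        Map<String, Object> config = new HashMap<>();
        config.put(ContextProperties.SUBMISSION_PARAMS, params);
    }
}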

Example 4 with TStream

Use of com.ibm.streamsx.topology.TStream in project streamsx.kafka by IBMStreams.

The class KafkaOperatorsBlobTypeTest, method kafkaBlobTypeTest:

@Test
public void kafkaBlobTypeTest() throws Exception {
    Topology topo = getTopology();
    StreamSchema schema = KafkaSPLStreamsUtils.BLOB_SCHEMA;
    // create the producer (produces tuples after a short delay)
    TStream<Blob> srcStream = topo.strings(Constants.STRING_DATA).transform(s -> ValueFactory.newBlob(s.getBytes())).modify(new Delay<>(5000));
    SPLStream splSrcStream = SPLStreams.convertStream(srcStream, new Converter(), schema);
    SPL.invokeSink(Constants.KafkaProducerOp, splSrcStream, getKafkaParams());
    // create the consumer
    SPLStream consumerStream = SPL.invokeSource(topo, Constants.KafkaConsumerOp, getKafkaParams(), schema);
    SPLStream msgStream = SPLStreams.stringToSPLStream(consumerStream.convert(t -> new String(t.getBlob("message").getData())));
    // test the output of the consumer
    StreamsContext<?> context = StreamsContextFactory.getStreamsContext(Type.DISTRIBUTED_TESTER);
    Tester tester = topo.getTester();
    Condition<List<String>> condition = KafkaSPLStreamsUtils.stringContentsUnordered(tester, msgStream, Constants.STRING_DATA);
    tester.complete(context, new HashMap<>(), condition, 30, TimeUnit.SECONDS);
    // check the results
    Assert.assertTrue(condition.getResult().size() > 0);
    Assert.assertTrue(condition.getResult().toString(), condition.valid());
}
Also used : TStream(com.ibm.streamsx.topology.TStream) Tester(com.ibm.streamsx.topology.tester.Tester) Delay(com.ibm.streamsx.kafka.test.utils.Delay) BiFunction(com.ibm.streamsx.topology.function.BiFunction) StreamsContextFactory(com.ibm.streamsx.topology.context.StreamsContextFactory) SPLStream(com.ibm.streamsx.topology.spl.SPLStream) HashMap(java.util.HashMap) Test(org.junit.Test) StreamSchema(com.ibm.streams.operator.StreamSchema) OutputTuple(com.ibm.streams.operator.OutputTuple) KafkaSPLStreamsUtils(com.ibm.streamsx.kafka.test.utils.KafkaSPLStreamsUtils) TimeUnit(java.util.concurrent.TimeUnit) List(java.util.List) Topology(com.ibm.streamsx.topology.Topology) StreamsContext(com.ibm.streamsx.topology.context.StreamsContext) Constants(com.ibm.streamsx.kafka.test.utils.Constants) Map(java.util.Map) SPL(com.ibm.streamsx.topology.spl.SPL) Condition(com.ibm.streamsx.topology.tester.Condition) Type(com.ibm.streamsx.topology.context.StreamsContext.Type) SPLStreams(com.ibm.streamsx.topology.spl.SPLStreams) Blob(com.ibm.streams.operator.types.Blob) Assert(org.junit.Assert) ValueFactory(com.ibm.streams.operator.types.ValueFactory)
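
The Converter instantiated above is a class from the streamsx.kafka test sources and is not shown here. As a hedged sketch, a converter compatible with the SPLStreams.convertStream(TStream, BiFunction, StreamSchema) call for a blob "message" attribute could look like the following; the class name BlobConverter is hypothetical.

import com.ibm.streams.operator.OutputTuple;
import com.ibm.streams.operator.types.Blob;
import com.ibm.streamsx.topology.function.BiFunction;

// Hypothetical converter: copies each Blob tuple into the "message" attribute
// of the SPL output tuple, matching the attribute read back by the consumer above.
public class BlobConverter implements BiFunction<Blob, OutputTuple, OutputTuple> {
    private static final long serialVersionUID = 1L;

    @Override
    public OutputTuple apply(Blob blob, OutputTuple outTuple) {
        outTuple.setBlob("message", blob);
        return outTuple;
    }
}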

Example 5 with TStream

Use of com.ibm.streamsx.topology.TStream in project streamsx.kafka by IBMStreams.

The class KafkaOperatorsDoubleTypeTest, method kafkaDoubleTypeTest:

@Test
public void kafkaDoubleTypeTest() throws Exception {
    Topology topo = getTopology();
    StreamSchema schema = KafkaSPLStreamsUtils.DOUBLE_SCHEMA;
    // create the producer (produces tuples after a short delay)
    TStream<Double> srcStream = topo.strings(DATA).transform(s -> Double.valueOf(s)).modify(new Delay<>(5000));
    SPLStream splSrcStream = SPLStreams.convertStream(srcStream, new Converter(), schema);
    SPL.invokeSink(Constants.KafkaProducerOp, splSrcStream, getKafkaParams());
    // create the consumer
    SPLStream consumerStream = SPL.invokeSource(topo, Constants.KafkaConsumerOp, getKafkaParams(), schema);
    SPLStream msgStream = SPLStreams.stringToSPLStream(consumerStream.convert(t -> String.valueOf(t.getDouble("message"))));
    // test the output of the consumer
    StreamsContext<?> context = StreamsContextFactory.getStreamsContext(Type.DISTRIBUTED_TESTER);
    Tester tester = topo.getTester();
    Condition<List<String>> condition = KafkaSPLStreamsUtils.stringContentsUnordered(tester, msgStream, DATA);
    tester.complete(context, new HashMap<>(), condition, 30, TimeUnit.SECONDS);
    // check the results
    Assert.assertTrue(condition.getResult().size() > 0);
    Assert.assertTrue(condition.getResult().toString(), condition.valid());
}
Also used : TStream(com.ibm.streamsx.topology.TStream) Tester(com.ibm.streamsx.topology.tester.Tester) Delay(com.ibm.streamsx.kafka.test.utils.Delay) BiFunction(com.ibm.streamsx.topology.function.BiFunction) StreamsContextFactory(com.ibm.streamsx.topology.context.StreamsContextFactory) SPLStream(com.ibm.streamsx.topology.spl.SPLStream) HashMap(java.util.HashMap) Test(org.junit.Test) StreamSchema(com.ibm.streams.operator.StreamSchema) OutputTuple(com.ibm.streams.operator.OutputTuple) KafkaSPLStreamsUtils(com.ibm.streamsx.kafka.test.utils.KafkaSPLStreamsUtils) TimeUnit(java.util.concurrent.TimeUnit) List(java.util.List) Topology(com.ibm.streamsx.topology.Topology) StreamsContext(com.ibm.streamsx.topology.context.StreamsContext) Constants(com.ibm.streamsx.kafka.test.utils.Constants) Map(java.util.Map) SPL(com.ibm.streamsx.topology.spl.SPL) Condition(com.ibm.streamsx.topology.tester.Condition) Type(com.ibm.streamsx.topology.context.StreamsContext.Type) SPLStreams(com.ibm.streamsx.topology.spl.SPLStreams) Assert(org.junit.Assert)
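
KafkaSPLStreamsUtils.stringContentsUnordered is a helper from the streamsx.kafka tests; the underlying validation relies on the standard Tester API. A minimal sketch using only built-in conditions, assuming Tester.stringContentsUnordered and an embedded tester context (the stream contents here are illustrative):

import java.util.HashMap;
import java.util.List;
import java.util.concurrent.TimeUnit;
import com.ibm.streamsx.topology.TStream;
import com.ibm.streamsx.topology.Topology;
import com.ibm.streamsx.topology.context.StreamsContext;
import com.ibm.streamsx.topology.context.StreamsContextFactory;
import com.ibm.streamsx.topology.tester.Condition;
import com.ibm.streamsx.topology.tester.Tester;
import org.junit.Assert;

public class TesterSketch {
    public static void checkExpectedContents() throws Exception {
        Topology topo = new Topology("TesterSketch");
        TStream<String> results = topo.strings("1.0", "2.0", "3.0");
        Tester tester = topo.getTester();
        // Expect exactly these tuples, in any order.
        Condition<List<String>> contents = tester.stringContentsUnordered(results, "1.0", "2.0", "3.0");
        // EMBEDDED_TESTER runs the topology in-process; the Kafka tests above use
        // DISTRIBUTED_TESTER against a running Streams instance instead.
        StreamsContext<?> context = StreamsContextFactory.getStreamsContext(StreamsContext.Type.EMBEDDED_TESTER);
        tester.complete(context, new HashMap<>(), contents, 30, TimeUnit.SECONDS);
        Assert.assertTrue(contents.toString(), contents.valid());
    }
}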

Aggregations

TStream (com.ibm.streamsx.topology.TStream): 51 usages
Topology (com.ibm.streamsx.topology.Topology): 43 usages
Test (org.junit.Test): 43 usages
Tester (com.ibm.streamsx.topology.tester.Tester): 40 usages
List (java.util.List): 35 usages
TestTopology (com.ibm.streamsx.topology.test.TestTopology): 30 usages
Condition (com.ibm.streamsx.topology.tester.Condition): 27 usages
TimeUnit (java.util.concurrent.TimeUnit): 26 usages
HashSet (java.util.HashSet): 25 usages
HashMap (java.util.HashMap): 21 usages
SPLStream (com.ibm.streamsx.topology.spl.SPLStream): 18 usages
ArrayList (java.util.ArrayList): 17 usages
StreamsContext (com.ibm.streamsx.topology.context.StreamsContext): 16 usages
SPLStreams (com.ibm.streamsx.topology.spl.SPLStreams): 15 usages
Map (java.util.Map): 15 usages
StreamsContextFactory (com.ibm.streamsx.topology.context.StreamsContextFactory): 14 usages
SPL (com.ibm.streamsx.topology.spl.SPL): 14 usages
Assert.assertTrue (org.junit.Assert.assertTrue): 14 usages
Assume.assumeTrue (org.junit.Assume.assumeTrue): 14 usages
Type (com.ibm.streamsx.topology.context.StreamsContext.Type): 13 usages