
Example 1 with DStreamCollection

Use of io.cdap.cdap.etl.spark.streaming.DStreamCollection in project cdap by caskdata.

From the class SparkStreamingPipelineRunner, the method mergeJoinResults:

@Override
protected SparkCollection<Object> mergeJoinResults(StageSpec stageSpec, FunctionCache.Factory functionCacheFactory,
                                                   SparkPairCollection<Object, List<JoinElement<Object>>> joinedInputs,
                                                   StageStatisticsCollector collector) throws Exception {
    DynamicDriverContext dynamicDriverContext = new DynamicDriverContext(stageSpec, sec, collector);
    JavaPairDStream<Object, List<JoinElement<Object>>> pairDStream = joinedInputs.getUnderlying();
    // Merge the joined elements of each key back into a single stream of output records.
    // The merge logic runs once per batch RDD via transform().
    JavaDStream<Object> result = pairDStream.transform(
        new DynamicJoinMerge<>(dynamicDriverContext, functionCacheFactory.newCache()));
    return new DStreamCollection<>(sec, functionCacheFactory, result);
}
Also used: DStreamCollection (io.cdap.cdap.etl.spark.streaming.DStreamCollection), PairDStreamCollection (io.cdap.cdap.etl.spark.streaming.PairDStreamCollection), List (java.util.List), DynamicDriverContext (io.cdap.cdap.etl.spark.streaming.DynamicDriverContext)
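DynamicJoinMerge is CDAP's own per-batch function, but the shape of the call is plain Spark Streaming: JavaPairDStream.transform takes a function that is applied to the pair RDD of each micro-batch and returns a new RDD. Below is a minimal sketch of a merge step in that spirit. The class name SimpleJoinMerge is hypothetical and stands in for DynamicJoinMerge; it omits CDAP's dynamic plugin instantiation and function cache.

import java.util.List;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;

// Hypothetical stand-in for CDAP's DynamicJoinMerge (illustrative only, not
// the CDAP implementation). Invoked once per micro-batch by
// JavaPairDStream.transform.
public class SimpleJoinMerge<T> implements Function<JavaPairRDD<Object, List<T>>, JavaRDD<T>> {
    @Override
    public JavaRDD<T> call(JavaPairRDD<Object, List<T>> joined) {
        // Drop the join keys and emit every joined element as its own record.
        return joined.values().flatMap(List::iterator);
    }
}

Given a JavaPairDStream<Object, List<String>> named joined, calling joined.transform(new SimpleJoinMerge<>()) would yield a JavaDStream<String> of the flattened elements, mirroring how mergeJoinResults produces the JavaDStream it wraps in a DStreamCollection.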

Example 2 with DStreamCollection

Use of io.cdap.cdap.etl.spark.streaming.DStreamCollection in project cdap by caskdata.

From the class SparkStreamingPipelineRunner, the method getSource:

@Override
protected SparkCollection<RecordInfo<Object>> getSource(StageSpec stageSpec, FunctionCache.Factory functionCacheFactory,
                                                        StageStatisticsCollector collector) throws Exception {
    StreamingSource<Object> source;
    if (checkpointsDisabled) {
        PluginFunctionContext pluginFunctionContext = new PluginFunctionContext(stageSpec, sec, collector);
        source = pluginFunctionContext.createPlugin();
    } else {
        // Check for macros in any StreamingSource. If checkpoints are enabled,
        // Spark Streaming will serialize all InputDStreams created in the checkpoint, which means
        // the InputDStream is deserialized directly from the checkpoint instead of instantiated through CDAP.
        // This means there isn't any way for us to perform macro evaluation on sources when they are loaded from
        // checkpoints. We can work around this in all other pipeline stages by dynamically instantiating the
        // plugin in all DStream functions, but can't for InputDStreams because the InputDStream constructor
        // adds itself to the context dag. Yay for constructors with global side effects.
        // TODO: (HYDRATOR-1030) figure out how to do this at configure time instead of run time
        MacroEvaluator macroEvaluator = new ErrorMacroEvaluator(
            "Due to spark limitations, macro evaluation is not allowed in streaming sources when checkpointing "
                + "is enabled.");
        PluginContext pluginContext = new SparkPipelinePluginContext(sec.getPluginContext(), sec.getMetrics(),
                                                                     spec.isStageLoggingEnabled(),
                                                                     spec.isProcessTimingEnabled());
        source = pluginContext.newPluginInstance(stageSpec.getName(), macroEvaluator);
    }
    DataTracer dataTracer = sec.getDataTracer(stageSpec.getName());
    StreamingContext sourceContext = new DefaultStreamingContext(stageSpec, sec, streamingContext);
    JavaDStream<Object> javaDStream = source.getStream(sourceContext);
    if (dataTracer.isEnabled()) {
        // transform() creates a new function for each batch RDD, so this limits each RDD
        // but not the entire DStream.
        javaDStream = javaDStream.transform(new LimitingFunction<>(spec.getNumOfRecordsPreview()));
    }
    // Count outgoing records for metrics, then wrap each record in a RecordInfo tagged with its stage name.
    JavaDStream<RecordInfo<Object>> outputDStream = javaDStream
        .transform(new CountingTransformFunction<>(stageSpec.getName(), sec.getMetrics(), "records.out", dataTracer))
        .map(new WrapOutputTransformFunction<>(stageSpec.getName()));
    return new DStreamCollection<>(sec, functionCacheFactory, outputDStream);
}
Also used: DStreamCollection (io.cdap.cdap.etl.spark.streaming.DStreamCollection), PairDStreamCollection (io.cdap.cdap.etl.spark.streaming.PairDStreamCollection), StreamingContext (io.cdap.cdap.etl.api.streaming.StreamingContext), JavaStreamingContext (org.apache.spark.streaming.api.java.JavaStreamingContext), DefaultStreamingContext (io.cdap.cdap.etl.spark.streaming.DefaultStreamingContext), MacroEvaluator (io.cdap.cdap.api.macro.MacroEvaluator), SparkPipelinePluginContext (io.cdap.cdap.etl.spark.plugin.SparkPipelinePluginContext), PluginContext (io.cdap.cdap.api.plugin.PluginContext), RecordInfo (io.cdap.cdap.etl.common.RecordInfo), CountingTransformFunction (io.cdap.cdap.etl.spark.streaming.function.CountingTransformFunction), PluginFunctionContext (io.cdap.cdap.etl.spark.function.PluginFunctionContext), DataTracer (io.cdap.cdap.api.preview.DataTracer), LimitingFunction (io.cdap.cdap.etl.spark.streaming.function.preview.LimitingFunction)
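The comment above the LimitingFunction call captures a subtlety worth spelling out: transform() re-invokes its function for every batch, so any cap implemented inside it applies per micro-batch, not across the whole stream. Below is a minimal sketch of such a per-batch limiter using Spark's Java API. The class name PerBatchLimit is hypothetical and stands in for CDAP's LimitingFunction, whose actual implementation may differ.

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;

// Hypothetical per-batch limiter (illustrative only; CDAP's LimitingFunction
// may be implemented differently). transform() calls this once per batch RDD,
// so the cap applies to each micro-batch independently.
public class PerBatchLimit<T> implements Function<JavaRDD<T>, JavaRDD<T>> {
    private final int maxRecords;

    public PerBatchLimit(int maxRecords) {
        this.maxRecords = maxRecords;
    }

    @Override
    public JavaRDD<T> call(JavaRDD<T> rdd) {
        // zipWithIndex numbers records within this batch only, so the filter
        // keeps the first maxRecords of each batch, not of the whole stream.
        return rdd.zipWithIndex()
                  .filter(t -> t._2() < maxRecords)
                  .map(t -> t._1());
    }
}

Applied via javaDStream.transform(new PerBatchLimit<>(100)), each micro-batch is truncated to 100 records, matching the preview behavior the comment describes: each RDD is limited, but not the entire DStream.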

Aggregations

DStreamCollection (io.cdap.cdap.etl.spark.streaming.DStreamCollection): 2
PairDStreamCollection (io.cdap.cdap.etl.spark.streaming.PairDStreamCollection): 2
MacroEvaluator (io.cdap.cdap.api.macro.MacroEvaluator): 1
PluginContext (io.cdap.cdap.api.plugin.PluginContext): 1
DataTracer (io.cdap.cdap.api.preview.DataTracer): 1
StreamingContext (io.cdap.cdap.etl.api.streaming.StreamingContext): 1
RecordInfo (io.cdap.cdap.etl.common.RecordInfo): 1
PluginFunctionContext (io.cdap.cdap.etl.spark.function.PluginFunctionContext): 1
SparkPipelinePluginContext (io.cdap.cdap.etl.spark.plugin.SparkPipelinePluginContext): 1
DefaultStreamingContext (io.cdap.cdap.etl.spark.streaming.DefaultStreamingContext): 1
DynamicDriverContext (io.cdap.cdap.etl.spark.streaming.DynamicDriverContext): 1
CountingTransformFunction (io.cdap.cdap.etl.spark.streaming.function.CountingTransformFunction): 1
LimitingFunction (io.cdap.cdap.etl.spark.streaming.function.preview.LimitingFunction): 1
List (java.util.List): 1
JavaStreamingContext (org.apache.spark.streaming.api.java.JavaStreamingContext): 1