Example 1 with NoopStageStatisticsCollector

Use of co.cask.cdap.etl.common.NoopStageStatisticsCollector in project cdap by caskdata.

From the call method of the StreamingBatchSinkFunction class:

@Override
public Void call(JavaRDD<T> data, Time batchTime) throws Exception {
    if (data.isEmpty()) {
        return null;
    }
    final long logicalStartTime = batchTime.milliseconds();
    MacroEvaluator evaluator = new DefaultMacroEvaluator(new BasicArguments(sec), logicalStartTime, sec.getSecureStore(), sec.getNamespace());
    PluginContext pluginContext = new SparkPipelinePluginContext(sec.getPluginContext(), sec.getMetrics(), stageSpec.isStageLoggingEnabled(), stageSpec.isProcessTimingEnabled());
    final SparkBatchSinkFactory sinkFactory = new SparkBatchSinkFactory();
    final String stageName = stageSpec.getName();
    final BatchSink<Object, Object, Object> batchSink = pluginContext.newPluginInstance(stageName, evaluator);
    final PipelineRuntime pipelineRuntime = new SparkPipelineRuntime(sec, logicalStartTime);
    boolean isPrepared = false;
    boolean isDone = false;
    try {
        sec.execute(new TxRunnable() {

            @Override
            public void run(DatasetContext datasetContext) throws Exception {
                SparkBatchSinkContext sinkContext = new SparkBatchSinkContext(sinkFactory, sec, datasetContext, pipelineRuntime, stageSpec);
                batchSink.prepareRun(sinkContext);
            }
        });
        isPrepared = true;
        PluginFunctionContext pluginFunctionContext = new PluginFunctionContext(stageSpec, sec, pipelineRuntime.getArguments().asMap(), batchTime.milliseconds(), new NoopStageStatisticsCollector());
        PairFlatMapFunc<T, Object, Object> sinkFunction = new BatchSinkFunction<T, Object, Object>(pluginFunctionContext);
        sinkFactory.writeFromRDD(data.flatMapToPair(Compat.convert(sinkFunction)), sec, stageName, Object.class, Object.class);
        isDone = true;
        sec.execute(new TxRunnable() {

            @Override
            public void run(DatasetContext datasetContext) throws Exception {
                SparkBatchSinkContext sinkContext = new SparkBatchSinkContext(sinkFactory, sec, datasetContext, pipelineRuntime, stageSpec);
                batchSink.onRunFinish(true, sinkContext);
            }
        });
    } catch (Exception e) {
        LOG.error("Error writing to sink {} for the batch for time {}.", stageName, logicalStartTime, e);
    } finally {
        if (isPrepared && !isDone) {
            sec.execute(new TxRunnable() {

                @Override
                public void run(DatasetContext datasetContext) throws Exception {
                    SparkBatchSinkContext sinkContext = new SparkBatchSinkContext(sinkFactory, sec, datasetContext, pipelineRuntime, stageSpec);
                    batchSink.onRunFinish(false, sinkContext);
                }
            });
        }
    }
    return null;
}
Also used : NoopStageStatisticsCollector(co.cask.cdap.etl.common.NoopStageStatisticsCollector) MacroEvaluator(co.cask.cdap.api.macro.MacroEvaluator) DefaultMacroEvaluator(co.cask.cdap.etl.common.DefaultMacroEvaluator) SparkPipelineRuntime(co.cask.cdap.etl.spark.SparkPipelineRuntime) PipelineRuntime(co.cask.cdap.etl.common.PipelineRuntime) SparkPipelinePluginContext(co.cask.cdap.etl.spark.plugin.SparkPipelinePluginContext) PluginContext(co.cask.cdap.api.plugin.PluginContext) SparkBatchSinkContext(co.cask.cdap.etl.spark.batch.SparkBatchSinkContext) BatchSinkFunction(co.cask.cdap.etl.spark.function.BatchSinkFunction) PluginFunctionContext(co.cask.cdap.etl.spark.function.PluginFunctionContext) SparkBatchSinkFactory(co.cask.cdap.etl.spark.batch.SparkBatchSinkFactory) TxRunnable(co.cask.cdap.api.TxRunnable) BasicArguments(co.cask.cdap.etl.common.BasicArguments) DatasetContext(co.cask.cdap.api.data.DatasetContext)
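
In a streaming micro-batch sink there is no condition stage downstream to consume per-stage statistics, so the PluginFunctionContext is handed a NoopStageStatisticsCollector, a null-object collector that silently discards every update. Below is a minimal sketch of that null-object pattern; the interface and method names are illustrative assumptions, not the actual co.cask.cdap.etl.common.StageStatisticsCollector API.

// Illustrative null-object collector; the interface and method names here are
// hypothetical and only stand in for the real StageStatisticsCollector contract.
interface RecordStatsCollector {
    void incrementInputRecordCount();
    void incrementOutputRecordCount();
    void incrementErrorRecordCount();
}

final class NoopRecordStatsCollector implements RecordStatsCollector {
    @Override
    public void incrementInputRecordCount() {
        // intentionally empty: statistics are not tracked in this context
    }

    @Override
    public void incrementOutputRecordCount() {
        // intentionally empty
    }

    @Override
    public void incrementErrorRecordCount() {
        // intentionally empty
    }
}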

Example 2 with NoopStageStatisticsCollector

Use of co.cask.cdap.etl.common.NoopStageStatisticsCollector in project cdap by caskdata.

From the compute method of the DStreamCollection class:

@Override
public <U> SparkCollection<U> compute(final StageSpec stageSpec, SparkCompute<T, U> compute) throws Exception {
    final SparkCompute<T, U> wrappedCompute = new DynamicSparkCompute<>(new DynamicDriverContext(stageSpec, sec, new NoopStageStatisticsCollector()), compute);
    Transactionals.execute(sec, new TxRunnable() {

        @Override
        public void run(DatasetContext datasetContext) throws Exception {
            PipelineRuntime pipelineRuntime = new SparkPipelineRuntime(sec);
            SparkExecutionPluginContext sparkPluginContext = new BasicSparkExecutionPluginContext(sec, JavaSparkContext.fromSparkContext(stream.context().sparkContext()), datasetContext, pipelineRuntime, stageSpec);
            wrappedCompute.initialize(sparkPluginContext);
        }
    }, Exception.class);
    return wrap(stream.transform(new ComputeTransformFunction<>(sec, stageSpec, wrappedCompute)));
}
Also used : DynamicSparkCompute(co.cask.cdap.etl.spark.streaming.function.DynamicSparkCompute) NoopStageStatisticsCollector(co.cask.cdap.etl.common.NoopStageStatisticsCollector) ComputeTransformFunction(co.cask.cdap.etl.spark.streaming.function.ComputeTransformFunction) SparkPipelineRuntime(co.cask.cdap.etl.spark.SparkPipelineRuntime) PipelineRuntime(co.cask.cdap.etl.common.PipelineRuntime) BasicSparkExecutionPluginContext(co.cask.cdap.etl.spark.batch.BasicSparkExecutionPluginContext) SparkExecutionPluginContext(co.cask.cdap.etl.api.batch.SparkExecutionPluginContext) TxRunnable(co.cask.cdap.api.TxRunnable) DatasetContext(co.cask.cdap.api.data.DatasetContext)
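
The compute being wrapped here is an ordinary SparkCompute plugin; DStreamCollection initializes it inside a transaction (via Transactionals.execute and a TxRunnable) before handing it to the DStream transform. A minimal SparkCompute sketch follows for orientation; it assumes the standard SparkCompute<IN, OUT> API with initialize and transform hooks, and the class itself (UpperCaseCompute) is a made-up example. A deployable plugin would additionally carry the usual CDAP plugin annotations, omitted here for brevity.

import co.cask.cdap.etl.api.batch.SparkCompute;
import co.cask.cdap.etl.api.batch.SparkExecutionPluginContext;
import org.apache.spark.api.java.JavaRDD;

// Hypothetical plugin used only to illustrate what DStreamCollection.compute() wraps.
public class UpperCaseCompute extends SparkCompute<String, String> {

    @Override
    public void initialize(SparkExecutionPluginContext context) throws Exception {
        // one-time setup; in DStreamCollection.compute() this runs inside the transaction shown above
    }

    @Override
    public JavaRDD<String> transform(SparkExecutionPluginContext context, JavaRDD<String> input) throws Exception {
        // the per-batch computation; here it just upper-cases each record
        return input.map(String::toUpperCase);
    }
}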

Example 3 with NoopStageStatisticsCollector

Use of co.cask.cdap.etl.common.NoopStageStatisticsCollector in project cdap by caskdata.

From the readExternal method of the DynamicDriverContext class:

@Override
public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
    serializationVersion = in.readUTF();
    stageSpec = (StageSpec) in.readObject();
    sec = (JavaSparkExecutionContext) in.readObject();
    // we intentionally do not serialize this context in order to ensure that the runtime arguments
    // and logical start time are picked up from the JavaSparkExecutionContext. If we serialized it,
    // the arguments and start time of the very first pipeline run would get serialized, then
    // used for every subsequent run that loads from the checkpoint.
    pluginFunctionContext = new PluginFunctionContext(stageSpec, sec, new NoopStageStatisticsCollector());
}
Also used : PluginFunctionContext(co.cask.cdap.etl.spark.function.PluginFunctionContext) NoopStageStatisticsCollector(co.cask.cdap.etl.common.NoopStageStatisticsCollector)
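
Since only serializationVersion, stageSpec, and sec are read back, the matching writeExternal must write exactly those three fields in the same order and skip pluginFunctionContext. The counterpart below is inferred from that read order; treat it as a sketch, not the verbatim CDAP implementation.

@Override
public void writeExternal(ObjectOutput out) throws IOException {
    out.writeUTF(serializationVersion);
    out.writeObject(stageSpec);
    out.writeObject(sec);
    // pluginFunctionContext is deliberately not written; readExternal rebuilds it
    // with a NoopStageStatisticsCollector so runs restored from a checkpoint pick up
    // fresh runtime arguments and logical start time.
}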

Example 4 with NoopStageStatisticsCollector

Use of co.cask.cdap.etl.common.NoopStageStatisticsCollector in project cdap by caskdata.

From the getMultiOutputTransform method of the MapReduceTransformExecutorFactory class:

private <IN, ERROR> TrackedMultiOutputTransform<IN, ERROR> getMultiOutputTransform(StageSpec stageSpec) throws Exception {
    String stageName = stageSpec.getName();
    DefaultMacroEvaluator macroEvaluator = new DefaultMacroEvaluator(arguments, taskContext.getLogicalStartTime(), taskContext, taskContext.getNamespace());
    SplitterTransform<IN, ERROR> splitterTransform = pluginInstantiator.newPluginInstance(stageName, macroEvaluator);
    TransformContext transformContext = createRuntimeContext(stageSpec);
    splitterTransform.initialize(transformContext);
    StageMetrics stageMetrics = new DefaultStageMetrics(metrics, stageName);
    TaskAttemptContext taskAttemptContext = (TaskAttemptContext) taskContext.getHadoopContext();
    StageStatisticsCollector collector = isPipelineContainsCondition ? new MapReduceStageStatisticsCollector(stageName, taskAttemptContext) : new NoopStageStatisticsCollector();
    return new TrackedMultiOutputTransform<>(splitterTransform, stageMetrics, taskContext.getDataTracer(stageName), collector);
}
Also used : NoopStageStatisticsCollector(co.cask.cdap.etl.common.NoopStageStatisticsCollector) TaskAttemptContext(org.apache.hadoop.mapreduce.TaskAttemptContext) TrackedMultiOutputTransform(co.cask.cdap.etl.common.TrackedMultiOutputTransform) TransformContext(co.cask.cdap.etl.api.TransformContext) StageStatisticsCollector(co.cask.cdap.etl.common.StageStatisticsCollector) DefaultMacroEvaluator(co.cask.cdap.etl.common.DefaultMacroEvaluator) StageMetrics(co.cask.cdap.etl.api.StageMetrics) DefaultStageMetrics(co.cask.cdap.etl.common.DefaultStageMetrics)
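
The ternary near the end is the recurring selection pattern: statistics are only worth collecting when the pipeline contains a condition stage that will read them (MapReduceStageStatisticsCollector), and the no-op collector is used otherwise. Extracted into a hypothetical helper for readability (collectorFor is not a method of MapReduceTransformExecutorFactory):

// Hypothetical helper illustrating the selection above; not part of the original class.
private StageStatisticsCollector collectorFor(String stageName, TaskAttemptContext taskAttemptContext) {
    return isPipelineContainsCondition
        ? new MapReduceStageStatisticsCollector(stageName, taskAttemptContext)
        : new NoopStageStatisticsCollector();
}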

Example 5 with NoopStageStatisticsCollector

Use of co.cask.cdap.etl.common.NoopStageStatisticsCollector in project cdap by caskdata.

From the runPipeline method of the SparkPipelineRunner class:

public void runPipeline(PipelinePhase pipelinePhase, String sourcePluginType, JavaSparkExecutionContext sec, Map<String, Integer> stagePartitions, PluginContext pluginContext, Map<String, StageStatisticsCollector> collectors) throws Exception {
    MacroEvaluator macroEvaluator = new DefaultMacroEvaluator(new BasicArguments(sec), sec.getLogicalStartTime(), sec, sec.getNamespace());
    Map<String, EmittedRecords> emittedRecords = new HashMap<>();
    // should never happen, but removes warning
    if (pipelinePhase.getDag() == null) {
        throw new IllegalStateException("Pipeline phase has no connections.");
    }
    for (String stageName : pipelinePhase.getDag().getTopologicalOrder()) {
        StageSpec stageSpec = pipelinePhase.getStage(stageName);
        // noinspection ConstantConditions
        String pluginType = stageSpec.getPluginType();
        EmittedRecords.Builder emittedBuilder = EmittedRecords.builder();
        // don't want to do an additional filter for stages that can emit errors,
        // but aren't connected to an ErrorTransform
        // similarly, don't want to do an additional filter for alerts when the stage isn't connected to
        // an AlertPublisher
        boolean hasErrorOutput = false;
        boolean hasAlertOutput = false;
        Set<String> outputs = pipelinePhase.getStageOutputs(stageSpec.getName());
        for (String output : outputs) {
            String outputPluginType = pipelinePhase.getStage(output).getPluginType();
            // noinspection ConstantConditions
            if (ErrorTransform.PLUGIN_TYPE.equals(outputPluginType)) {
                hasErrorOutput = true;
            } else if (AlertPublisher.PLUGIN_TYPE.equals(outputPluginType)) {
                hasAlertOutput = true;
            }
        }
        SparkCollection<Object> stageData = null;
        Map<String, SparkCollection<Object>> inputDataCollections = new HashMap<>();
        Set<String> stageInputs = pipelinePhase.getStageInputs(stageName);
        for (String inputStageName : stageInputs) {
            StageSpec inputStageSpec = pipelinePhase.getStage(inputStageName);
            if (inputStageSpec == null) {
                // means the input to this stage is in a separate phase. For example, it is an action.
                continue;
            }
            String port = null;
            // not errors or alerts or output port records
            if (!Constants.Connector.PLUGIN_TYPE.equals(inputStageSpec.getPluginType()) && !Constants.Connector.PLUGIN_TYPE.equals(pluginType)) {
                port = inputStageSpec.getOutputPorts().get(stageName).getPort();
            }
            SparkCollection<Object> inputRecords = port == null ? emittedRecords.get(inputStageName).outputRecords : emittedRecords.get(inputStageName).outputPortRecords.get(port);
            inputDataCollections.put(inputStageName, inputRecords);
        }
        // initialize the stageRDD as the union of all input RDDs.
        if (!inputDataCollections.isEmpty()) {
            Iterator<SparkCollection<Object>> inputCollectionIter = inputDataCollections.values().iterator();
            stageData = inputCollectionIter.next();
            // don't union input records if we're joining or if we're processing errors
            while (!BatchJoiner.PLUGIN_TYPE.equals(pluginType) && !ErrorTransform.PLUGIN_TYPE.equals(pluginType) && inputCollectionIter.hasNext()) {
                stageData = stageData.union(inputCollectionIter.next());
            }
        }
        boolean isConnectorSource = Constants.Connector.PLUGIN_TYPE.equals(pluginType) && pipelinePhase.getSources().contains(stageName);
        boolean isConnectorSink = Constants.Connector.PLUGIN_TYPE.equals(pluginType) && pipelinePhase.getSinks().contains(stageName);
        StageStatisticsCollector collector = collectors.get(stageName) == null ? new NoopStageStatisticsCollector() : collectors.get(stageName);
        PluginFunctionContext pluginFunctionContext = new PluginFunctionContext(stageSpec, sec, collector);
        if (stageData == null) {
            // this if-else is nested inside the stageData null check to avoid warnings about
            // stageData possibly being null in the other else-if conditions
            if (sourcePluginType.equals(pluginType) || isConnectorSource) {
                SparkCollection<RecordInfo<Object>> combinedData = getSource(stageSpec, collector);
                emittedBuilder = addEmitted(emittedBuilder, pipelinePhase, stageSpec, combinedData, hasErrorOutput, hasAlertOutput);
            } else {
                throw new IllegalStateException(String.format("Stage '%s' has no input and is not a source.", stageName));
            }
        } else if (BatchSink.PLUGIN_TYPE.equals(pluginType) || isConnectorSink) {
            stageData.store(stageSpec, Compat.convert(new BatchSinkFunction(pluginFunctionContext)));
        } else if (Transform.PLUGIN_TYPE.equals(pluginType)) {
            SparkCollection<RecordInfo<Object>> combinedData = stageData.transform(stageSpec, collector);
            emittedBuilder = addEmitted(emittedBuilder, pipelinePhase, stageSpec, combinedData, hasErrorOutput, hasAlertOutput);
        } else if (SplitterTransform.PLUGIN_TYPE.equals(pluginType)) {
            SparkCollection<RecordInfo<Object>> combinedData = stageData.multiOutputTransform(stageSpec, collector);
            emittedBuilder = addEmitted(emittedBuilder, pipelinePhase, stageSpec, combinedData, hasErrorOutput, hasAlertOutput);
        } else if (ErrorTransform.PLUGIN_TYPE.equals(pluginType)) {
            // union all the errors coming into this stage
            SparkCollection<ErrorRecord<Object>> inputErrors = null;
            for (String inputStage : stageInputs) {
                SparkCollection<ErrorRecord<Object>> inputErrorsFromStage = emittedRecords.get(inputStage).errorRecords;
                if (inputErrorsFromStage == null) {
                    continue;
                }
                if (inputErrors == null) {
                    inputErrors = inputErrorsFromStage;
                } else {
                    inputErrors = inputErrors.union(inputErrorsFromStage);
                }
            }
            if (inputErrors != null) {
                SparkCollection<RecordInfo<Object>> combinedData = inputErrors.flatMap(stageSpec, Compat.convert(new ErrorTransformFunction<>(pluginFunctionContext)));
                emittedBuilder = addEmitted(emittedBuilder, pipelinePhase, stageSpec, combinedData, hasErrorOutput, hasAlertOutput);
            }
        } else if (SparkCompute.PLUGIN_TYPE.equals(pluginType)) {
            SparkCompute<Object, Object> sparkCompute = pluginContext.newPluginInstance(stageName, macroEvaluator);
            emittedBuilder = emittedBuilder.setOutput(stageData.compute(stageSpec, sparkCompute));
        } else if (SparkSink.PLUGIN_TYPE.equals(pluginType)) {
            SparkSink<Object> sparkSink = pluginContext.newPluginInstance(stageName, macroEvaluator);
            stageData.store(stageSpec, sparkSink);
        } else if (BatchAggregator.PLUGIN_TYPE.equals(pluginType)) {
            Integer partitions = stagePartitions.get(stageName);
            SparkCollection<RecordInfo<Object>> combinedData = stageData.aggregate(stageSpec, partitions, collector);
            emittedBuilder = addEmitted(emittedBuilder, pipelinePhase, stageSpec, combinedData, hasErrorOutput, hasAlertOutput);
        } else if (BatchJoiner.PLUGIN_TYPE.equals(pluginType)) {
            BatchJoiner<Object, Object, Object> joiner = pluginContext.newPluginInstance(stageName, macroEvaluator);
            BatchJoinerRuntimeContext joinerRuntimeContext = pluginFunctionContext.createBatchRuntimeContext();
            joiner.initialize(joinerRuntimeContext);
            Map<String, SparkPairCollection<Object, Object>> preJoinStreams = new HashMap<>();
            for (Map.Entry<String, SparkCollection<Object>> inputStreamEntry : inputDataCollections.entrySet()) {
                String inputStage = inputStreamEntry.getKey();
                SparkCollection<Object> inputStream = inputStreamEntry.getValue();
                preJoinStreams.put(inputStage, addJoinKey(stageSpec, inputStage, inputStream, collector));
            }
            Set<String> remainingInputs = new HashSet<>();
            remainingInputs.addAll(inputDataCollections.keySet());
            Integer numPartitions = stagePartitions.get(stageName);
            SparkPairCollection<Object, List<JoinElement<Object>>> joinedInputs = null;
            // inner join on required inputs
            for (final String inputStageName : joiner.getJoinConfig().getRequiredInputs()) {
                SparkPairCollection<Object, Object> preJoinCollection = preJoinStreams.get(inputStageName);
                if (joinedInputs == null) {
                    joinedInputs = preJoinCollection.mapValues(new InitialJoinFunction<>(inputStageName));
                } else {
                    JoinFlattenFunction<Object> joinFlattenFunction = new JoinFlattenFunction<>(inputStageName);
                    joinedInputs = numPartitions == null ? joinedInputs.join(preJoinCollection).mapValues(joinFlattenFunction) : joinedInputs.join(preJoinCollection, numPartitions).mapValues(joinFlattenFunction);
                }
                remainingInputs.remove(inputStageName);
            }
            // outer join on non-required inputs
            boolean isFullOuter = joinedInputs == null;
            for (final String inputStageName : remainingInputs) {
                SparkPairCollection<Object, Object> preJoinStream = preJoinStreams.get(inputStageName);
                if (joinedInputs == null) {
                    joinedInputs = preJoinStream.mapValues(new InitialJoinFunction<>(inputStageName));
                } else {
                    if (isFullOuter) {
                        OuterJoinFlattenFunction<Object> flattenFunction = new OuterJoinFlattenFunction<>(inputStageName);
                        joinedInputs = numPartitions == null ? joinedInputs.fullOuterJoin(preJoinStream).mapValues(flattenFunction) : joinedInputs.fullOuterJoin(preJoinStream, numPartitions).mapValues(flattenFunction);
                    } else {
                        LeftJoinFlattenFunction<Object> flattenFunction = new LeftJoinFlattenFunction<>(inputStageName);
                        joinedInputs = numPartitions == null ? joinedInputs.leftOuterJoin(preJoinStream).mapValues(flattenFunction) : joinedInputs.leftOuterJoin(preJoinStream, numPartitions).mapValues(flattenFunction);
                    }
                }
            }
            // should never happen, but removes warnings
            if (joinedInputs == null) {
                throw new IllegalStateException("There are no inputs into join stage " + stageName);
            }
            emittedBuilder = emittedBuilder.setOutput(mergeJoinResults(stageSpec, joinedInputs, collector).cache());
        } else if (Windower.PLUGIN_TYPE.equals(pluginType)) {
            Windower windower = pluginContext.newPluginInstance(stageName, macroEvaluator);
            emittedBuilder = emittedBuilder.setOutput(stageData.window(stageSpec, windower));
        } else if (AlertPublisher.PLUGIN_TYPE.equals(pluginType)) {
            // union all the alerts coming into this stage
            SparkCollection<Alert> inputAlerts = null;
            for (String inputStage : stageInputs) {
                SparkCollection<Alert> inputErrorsFromStage = emittedRecords.get(inputStage).alertRecords;
                if (inputErrorsFromStage == null) {
                    continue;
                }
                if (inputAlerts == null) {
                    inputAlerts = inputErrorsFromStage;
                } else {
                    inputAlerts = inputAlerts.union(inputErrorsFromStage);
                }
            }
            if (inputAlerts != null) {
                inputAlerts.publishAlerts(stageSpec, collector);
            }
        } else {
            throw new IllegalStateException(String.format("Stage %s is of unsupported plugin type %s.", stageName, pluginType));
        }
        emittedRecords.put(stageName, emittedBuilder.build());
    }
}
Also used : HashMap(java.util.HashMap) PluginFunctionContext(co.cask.cdap.etl.spark.function.PluginFunctionContext) DefaultMacroEvaluator(co.cask.cdap.etl.common.DefaultMacroEvaluator) List(java.util.List) BasicArguments(co.cask.cdap.etl.common.BasicArguments) HashSet(java.util.HashSet) BatchJoinerRuntimeContext(co.cask.cdap.etl.api.batch.BatchJoinerRuntimeContext) RecordInfo(co.cask.cdap.etl.common.RecordInfo) NoopStageStatisticsCollector(co.cask.cdap.etl.common.NoopStageStatisticsCollector) StageStatisticsCollector(co.cask.cdap.etl.common.StageStatisticsCollector) LeftJoinFlattenFunction(co.cask.cdap.etl.spark.function.LeftJoinFlattenFunction) Map(java.util.Map) MacroEvaluator(co.cask.cdap.api.macro.MacroEvaluator) SparkCompute(co.cask.cdap.etl.api.batch.SparkCompute) StageSpec(co.cask.cdap.etl.spec.StageSpec) ErrorTransformFunction(co.cask.cdap.etl.spark.function.ErrorTransformFunction) Windower(co.cask.cdap.etl.api.streaming.Windower) JoinFlattenFunction(co.cask.cdap.etl.spark.function.JoinFlattenFunction) OuterJoinFlattenFunction(co.cask.cdap.etl.spark.function.OuterJoinFlattenFunction) InitialJoinFunction(co.cask.cdap.etl.spark.function.InitialJoinFunction) BatchSinkFunction(co.cask.cdap.etl.spark.function.BatchSinkFunction) Alert(co.cask.cdap.etl.api.Alert) ErrorRecord(co.cask.cdap.etl.api.ErrorRecord)
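
runPipeline applies the same fallback, but looks the collector up in the collectors map: stages that feed a condition get a real StageStatisticsCollector, and every other stage gets the no-op implementation. The double map lookup in the ternary can also be written with Map.getOrDefault on a Java 8+ runtime; the snippet below is purely illustrative and the original code is unchanged.

// Equivalent fallback using Map.getOrDefault (illustrative; assumes a Java 8+ runtime).
// The no-op instance is cheap to construct, so passing it eagerly as the default is fine.
StageStatisticsCollector collector =
    collectors.getOrDefault(stageName, new NoopStageStatisticsCollector());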

Aggregations

NoopStageStatisticsCollector (co.cask.cdap.etl.common.NoopStageStatisticsCollector) 6
DefaultMacroEvaluator (co.cask.cdap.etl.common.DefaultMacroEvaluator) 4
StageStatisticsCollector (co.cask.cdap.etl.common.StageStatisticsCollector) 3
PluginFunctionContext (co.cask.cdap.etl.spark.function.PluginFunctionContext) 3
TxRunnable (co.cask.cdap.api.TxRunnable) 2
DatasetContext (co.cask.cdap.api.data.DatasetContext) 2
MacroEvaluator (co.cask.cdap.api.macro.MacroEvaluator) 2
StageMetrics (co.cask.cdap.etl.api.StageMetrics) 2
BatchJoinerRuntimeContext (co.cask.cdap.etl.api.batch.BatchJoinerRuntimeContext) 2
BasicArguments (co.cask.cdap.etl.common.BasicArguments) 2
DefaultStageMetrics (co.cask.cdap.etl.common.DefaultStageMetrics) 2
PipelineRuntime (co.cask.cdap.etl.common.PipelineRuntime) 2
SparkPipelineRuntime (co.cask.cdap.etl.spark.SparkPipelineRuntime) 2
BatchSinkFunction (co.cask.cdap.etl.spark.function.BatchSinkFunction) 2
TaskAttemptContext (org.apache.hadoop.mapreduce.TaskAttemptContext) 2
PluginContext (co.cask.cdap.api.plugin.PluginContext) 1
Alert (co.cask.cdap.etl.api.Alert) 1
ErrorRecord (co.cask.cdap.etl.api.ErrorRecord) 1
TransformContext (co.cask.cdap.etl.api.TransformContext) 1
Transformation (co.cask.cdap.etl.api.Transformation) 1