
Example 1 with InputDataStreamer

Use of edu.iu.dsc.tws.examples.ml.svm.streamer.InputDataStreamer in the twister2 project by DSC-SPIDAL.

From the class SvmSgdAdvancedRunner, method executeIterativeTrainingGraph:

/**
 * This method executes the iterative training graph.
 * Training is done in parallel, depending on the parallelism factor given.
 * In this implementation the data-loading parallelism and the computing
 * (training) parallelism are the same. Keeping them equal is the general
 * model; you can increase the parallelism as you wish, but it is advised to
 * keep these values equal, since dynamic parallelism in training is not yet
 * fully tested in the Twister2 framework.
 *
 * @return Twister2 DataObject{@literal <double[]>} containing the reduced weight vector
 */
public DataObject<double[]> executeIterativeTrainingGraph() {
    DataObject<double[]> trainedWeight = null;
    dataStreamer = new InputDataStreamer(this.operationMode, svmJobParameters.isDummy(), this.binaryBatchModel);
    iterativeSVMCompute = new IterativeSVMCompute(this.binaryBatchModel, this.operationMode);
    svmReduce = new SVMReduce(this.operationMode);
    trainingBuilder.addSource(Constants.SimpleGraphConfig.DATASTREAMER_SOURCE,
        dataStreamer, dataStreamerParallelism);
    ComputeConnection svmComputeConnection = trainingBuilder.addCompute(
        Constants.SimpleGraphConfig.SVM_COMPUTE, iterativeSVMCompute, svmComputeParallelism);
    ComputeConnection svmReduceConnection = trainingBuilder.addCompute(
        Constants.SimpleGraphConfig.SVM_REDUCE, svmReduce, reduceParallelism);
    svmComputeConnection.direct(Constants.SimpleGraphConfig.DATASTREAMER_SOURCE)
        .viaEdge(Constants.SimpleGraphConfig.DATA_EDGE)
        .withDataType(MessageTypes.OBJECT);
    // Alternative: a one-way reduce instead of allreduce
    // svmReduceConnection
    //     .reduce(Constants.SimpleGraphConfig.SVM_COMPUTE, Constants.SimpleGraphConfig.REDUCE_EDGE,
    //         new ReduceAggregator(), DataType.OBJECT);
    svmReduceConnection.allreduce(Constants.SimpleGraphConfig.SVM_COMPUTE)
        .viaEdge(Constants.SimpleGraphConfig.REDUCE_EDGE)
        .withReductionFunction(new ReduceAggregator())
        .withDataType(MessageTypes.OBJECT);
    trainingBuilder.setMode(operationMode);
    ComputeGraph graph = trainingBuilder.build();
    graph.setGraphName("training-graph");
    ExecutionPlan plan = taskExecutor.plan(graph);
    IExecutor ex = taskExecutor.createExecution(graph, plan);
    // Iteration is decoupled from the computation task: the same execution is
    // re-run per iteration with the reduced weight vector fed back in as input.
    for (int i = 0; i < this.binaryBatchModel.getIterations(); i++) {
        taskExecutor.addInput(graph, plan, Constants.SimpleGraphConfig.DATASTREAMER_SOURCE,
            Constants.SimpleGraphConfig.INPUT_DATA, trainingData);
        taskExecutor.addInput(graph, plan, Constants.SimpleGraphConfig.DATASTREAMER_SOURCE,
            Constants.SimpleGraphConfig.INPUT_WEIGHT_VECTOR, inputWeightVector);
        inputWeightVector = taskExecutor.getOutput(graph, plan,
            Constants.SimpleGraphConfig.SVM_REDUCE);
        ex.execute();
    }
    ex.closeExecution();
    LOG.info("Task Graph Executed !!! ");
    if (workerId == 0) {
        trainedWeight = retrieveWeightVectorFromTaskGraph(graph, plan);
        this.trainedWeightVector = trainedWeight;
    }
    return trainedWeight;
}
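The allreduce step above combines the partial weight vectors produced by the parallel SVM compute tasks through ReduceAggregator, and hands every task the combined result. As a rough, library-free illustration of what such a reduction function does, here is a self-contained sketch that combines per-task weight vectors by element-wise summation; the actual combine rule of ReduceAggregator in twister2 may differ (e.g. averaging), so treat it as an assumption.

```java
import java.util.Arrays;
import java.util.List;

public class AllReduceSketch {

    // Hypothetical stand-in for ReduceAggregator: element-wise sum of two
    // equal-length weight vectors. The real aggregator's rule may differ.
    static double[] reduce(double[] a, double[] b) {
        double[] out = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = a[i] + b[i];
        }
        return out;
    }

    public static void main(String[] args) {
        // Each entry mimics the partial weight vector emitted by one parallel
        // SVM compute task over its data partition.
        List<double[]> partials = Arrays.asList(
            new double[]{1.0, 2.0},
            new double[]{3.0, 4.0},
            new double[]{5.0, 6.0});
        double[] combined = partials.get(0);
        for (int i = 1; i < partials.size(); i++) {
            combined = reduce(combined, partials.get(i));
        }
        // With allreduce, every task would receive this combined vector back.
        System.out.println(Arrays.toString(combined)); // prints [9.0, 12.0]
    }
}
```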
Also used:
- edu.iu.dsc.tws.examples.ml.svm.aggregate.SVMReduce
- edu.iu.dsc.tws.examples.ml.svm.aggregate.ReduceAggregator
- edu.iu.dsc.tws.api.compute.executor.ExecutionPlan
- edu.iu.dsc.tws.examples.ml.svm.compute.IterativeSVMCompute
- edu.iu.dsc.tws.api.compute.graph.ComputeGraph
- edu.iu.dsc.tws.api.compute.executor.IExecutor
- edu.iu.dsc.tws.examples.ml.svm.streamer.InputDataStreamer
- edu.iu.dsc.tws.task.impl.ComputeConnection

Example 2 with InputDataStreamer

Use of edu.iu.dsc.tws.examples.ml.svm.streamer.InputDataStreamer in the twister2 project by DSC-SPIDAL.

From the class SvmSgdAdvancedRunner, method executeTrainingGraph:

/**
 * This method executes the training graph.
 * Training is done in parallel, depending on the parallelism factor given.
 * In this implementation the data-loading parallelism and the computing
 * (training) parallelism are the same. Keeping them equal is the general
 * model; you can increase the parallelism as you wish, but it is advised to
 * keep these values equal, since dynamic parallelism in training is not yet
 * fully tested in the Twister2 framework.
 *
 * @return Twister2 DataObject{@literal <double[]>} containing the reduced weight vector
 */
public DataObject<double[]> executeTrainingGraph() {
    DataObject<double[]> trainedWeight = null;
    dataStreamer = new InputDataStreamer(this.operationMode, svmJobParameters.isDummy(), this.binaryBatchModel);
    svmCompute = new SVMCompute(this.binaryBatchModel, this.operationMode);
    svmReduce = new SVMReduce(this.operationMode);
    trainingBuilder.addSource(Constants.SimpleGraphConfig.DATASTREAMER_SOURCE,
        dataStreamer, dataStreamerParallelism);
    ComputeConnection svmComputeConnection = trainingBuilder.addCompute(
        Constants.SimpleGraphConfig.SVM_COMPUTE, svmCompute, svmComputeParallelism);
    ComputeConnection svmReduceConnection = trainingBuilder.addCompute(
        Constants.SimpleGraphConfig.SVM_REDUCE, svmReduce, reduceParallelism);
    svmComputeConnection.direct(Constants.SimpleGraphConfig.DATASTREAMER_SOURCE)
        .viaEdge(Constants.SimpleGraphConfig.DATA_EDGE)
        .withDataType(MessageTypes.OBJECT);
    // Alternative: a one-way reduce instead of allreduce
    // svmReduceConnection
    //     .reduce(Constants.SimpleGraphConfig.SVM_COMPUTE, Constants.SimpleGraphConfig.REDUCE_EDGE,
    //         new ReduceAggregator(), DataType.OBJECT);
    svmReduceConnection.allreduce(Constants.SimpleGraphConfig.SVM_COMPUTE)
        .viaEdge(Constants.SimpleGraphConfig.REDUCE_EDGE)
        .withReductionFunction(new ReduceAggregator())
        .withDataType(MessageTypes.OBJECT);
    trainingBuilder.setMode(operationMode);
    ComputeGraph graph = trainingBuilder.build();
    graph.setGraphName("training-graph");
    ExecutionPlan plan = taskExecutor.plan(graph);
    taskExecutor.addInput(graph, plan, Constants.SimpleGraphConfig.DATASTREAMER_SOURCE,
        Constants.SimpleGraphConfig.INPUT_DATA, trainingData);
    taskExecutor.addInput(graph, plan, Constants.SimpleGraphConfig.DATASTREAMER_SOURCE,
        Constants.SimpleGraphConfig.INPUT_WEIGHT_VECTOR, inputWeightVector);
    taskExecutor.execute(graph, plan);
    LOG.info("Task Graph Executed !!! ");
    if (workerId == 0) {
        trainedWeight = retrieveWeightVectorFromTaskGraph(graph, plan);
        this.trainedWeightVector = trainedWeight;
    }
    return trainedWeight;
}
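Note the difference from Example 1: this variant runs the graph once via taskExecutor.execute, whereas the iterative variant creates an IExecutor and re-runs it per iteration, feeding the reduced weight vector back in as the next iteration's input. That feedback loop can be sketched with plain Java (the names and the toy update rule below are illustrative assumptions, not twister2 API):

```java
import java.util.function.UnaryOperator;

public class IterationSketch {

    // Hypothetical stand-in for one execution of the training graph: takes
    // the current weight vector and returns the reduced (updated) one.
    static double[] runOnce(double[] weights, UnaryOperator<double[]> step) {
        return step.apply(weights);
    }

    public static void main(String[] args) {
        double[] weights = {0.0, 0.0};
        // Toy update rule standing in for the SGD step plus allreduce: move
        // each weight halfway toward 1.0 on every pass.
        UnaryOperator<double[]> step = w -> new double[]{
            w[0] + 0.5 * (1.0 - w[0]),
            w[1] + 0.5 * (1.0 - w[1])};
        for (int i = 0; i < 3; i++) {
            // Mirrors the loop in executeIterativeTrainingGraph: the output of
            // one execution becomes the weight-vector input of the next.
            weights = runOnce(weights, step);
        }
        System.out.printf("%.4f %.4f%n", weights[0], weights[1]); // prints 0.8750 0.8750
    }
}
```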
Also used:
- edu.iu.dsc.tws.examples.ml.svm.aggregate.SVMReduce
- edu.iu.dsc.tws.examples.ml.svm.aggregate.ReduceAggregator
- edu.iu.dsc.tws.api.compute.executor.ExecutionPlan
- edu.iu.dsc.tws.api.compute.graph.ComputeGraph
- edu.iu.dsc.tws.examples.ml.svm.compute.IterativeSVMCompute
- edu.iu.dsc.tws.examples.ml.svm.compute.SVMCompute
- edu.iu.dsc.tws.examples.ml.svm.streamer.InputDataStreamer
- edu.iu.dsc.tws.task.impl.ComputeConnection

Aggregations

- ExecutionPlan (edu.iu.dsc.tws.api.compute.executor.ExecutionPlan): 2 uses
- ComputeGraph (edu.iu.dsc.tws.api.compute.graph.ComputeGraph): 2 uses
- ReduceAggregator (edu.iu.dsc.tws.examples.ml.svm.aggregate.ReduceAggregator): 2 uses
- SVMReduce (edu.iu.dsc.tws.examples.ml.svm.aggregate.SVMReduce): 2 uses
- IterativeSVMCompute (edu.iu.dsc.tws.examples.ml.svm.compute.IterativeSVMCompute): 2 uses
- InputDataStreamer (edu.iu.dsc.tws.examples.ml.svm.streamer.InputDataStreamer): 2 uses
- ComputeConnection (edu.iu.dsc.tws.task.impl.ComputeConnection): 2 uses
- IExecutor (edu.iu.dsc.tws.api.compute.executor.IExecutor): 1 use
- SVMCompute (edu.iu.dsc.tws.examples.ml.svm.compute.SVMCompute): 1 use