Example 6 with ComputeGraph

use of edu.iu.dsc.tws.api.compute.graph.ComputeGraph in project twister2 by DSC-SPIDAL.

the class SourceTaskDataLoader method execute.

@Override
public void execute() {
    getParams();
    /*
     * First data is loaded from files
     * */
    ComputeGraphBuilder computeGraphBuilder = ComputeGraphBuilder.newBuilder(config);
    // DataObjectSource sourceTask = new DataObjectSource(Context.TWISTER2_DIRECT_EDGE,
    // dataSource);
    // DataObjectSink sinkTask = new DataObjectSink();
    // computeGraphBuilder.addSource("datapointsource", sourceTask, parallelism);
    // ComputeConnection firstGraphComputeConnection = computeGraphBuilder.addSink(
    // "datapointsink", sinkTask, parallelism);
    // firstGraphComputeConnection.direct("datapointsource",
    // Context.TWISTER2_DIRECT_EDGE, DataType.OBJECT);
    // computeGraphBuilder.setMode(OperationMode.BATCH);
    // 
    // ComputeGraph datapointsTaskGraph = computeGraphBuilder.build();
    // ExecutionPlan firstGraphExecutionPlan = taskExecutor.plan(datapointsTaskGraph);
    // taskExecutor.execute(datapointsTaskGraph, firstGraphExecutionPlan);
    // DataObject<Object> dataPointsObject = taskExecutor.getOutput(
    // datapointsTaskGraph, firstGraphExecutionPlan, "datapointsink");
    // LOG.info("Total Partitions : " + dataPointsObject.getPartitions().length);
    /*
     * Second Task
     * */
    DataSourceTask kMeansSourceTask = new DataSourceTask();
    SimpleDataAllReduceTask kMeansAllReduceTask = new SimpleDataAllReduceTask();
    computeGraphBuilder.addSource("kmeanssource", kMeansSourceTask, parallelism);
    ComputeConnection computeConnection = computeGraphBuilder.addCompute("kmeanssink", kMeansAllReduceTask, parallelism);
    computeConnection.allreduce("kmeanssource").viaEdge("all-reduce").withReductionFunction(new SimpleDataAggregator()).withDataType(MessageTypes.OBJECT);
    computeGraphBuilder.setMode(OperationMode.BATCH);
    ComputeGraph simpleTaskGraph = computeGraphBuilder.build();
    ExecutionPlan plan = taskExecutor.plan(simpleTaskGraph);
    // taskExecutor.addInput(
    // simpleTaskGraph, plan, "kmeanssource", "points", dataPointsObject);
    taskExecutor.execute(simpleTaskGraph, plan);
    DataObject<double[][]> dataSet = taskExecutor.getOutput(simpleTaskGraph, plan, "kmeanssink");
    // DataObject<Object> dataSet = taskExecutor.getOutput(simpleTaskGraph, plan, "kmeanssink");
    // DataPartition<Object> values = dataSet.getPartitions()[0];
    // Object lastObject = values.getConsumer().next();
    // LOG.info(String.format("Last Object : %s", lastObject.getClass().getName()));
}
Also used : ExecutionPlan(edu.iu.dsc.tws.api.compute.executor.ExecutionPlan) ComputeGraph(edu.iu.dsc.tws.api.compute.graph.ComputeGraph) ComputeGraphBuilder(edu.iu.dsc.tws.task.impl.ComputeGraphBuilder) ComputeConnection(edu.iu.dsc.tws.task.impl.ComputeConnection)
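
The reduction function wired in through withReductionFunction must satisfy twister2's IFunction contract: take two partial values and return one combined value. The body of SimpleDataAggregator is not shown on this page; the following is a minimal sketch of such an aggregator, assuming double[] partial results (the class body and import path are illustrative, not the project's actual implementation).

import edu.iu.dsc.tws.api.compute.IFunction;

// Illustrative all-reduce aggregator: element-wise sum of two partial results.
// The real SimpleDataAggregator may differ; this only shows the contract that
// withReductionFunction(...) expects.
public class SimpleDataAggregator implements IFunction {
    @Override
    public Object onMessage(Object object1, Object object2) {
        double[] left = (double[]) object1;
        double[] right = (double[]) object2;
        double[] result = new double[left.length];
        for (int i = 0; i < left.length; i++) {
            result[i] = left[i] + right[i];
        }
        return result;
    }
}

Because the edge is an allreduce, the combined value is delivered back to every target instance, so a single associative onMessage implementation is all the operation needs.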

Example 7 with ComputeGraph

use of edu.iu.dsc.tws.api.compute.graph.ComputeGraph in project twister2 by DSC-SPIDAL.

the class TaskWorkerDataLoader method execute.

@Override
public void execute() {
    getParams();
    ComputeGraphBuilder computeGraphBuilder = ComputeGraphBuilder.newBuilder(config);
    DataObjectSource sourceTask = new DataObjectSource(Context.TWISTER2_DIRECT_EDGE, dataSource);
    DataObjectSink sinkTask = new DataObjectSink();
    computeGraphBuilder.addSource("datapointsource", sourceTask, parallelism);
    ComputeConnection firstGraphComputeConnection = computeGraphBuilder.addCompute("datapointsink", sinkTask, parallelism);
    firstGraphComputeConnection.direct("datapointsource").viaEdge(Context.TWISTER2_DIRECT_EDGE).withDataType(MessageTypes.OBJECT);
    computeGraphBuilder.setMode(OperationMode.BATCH);
    ComputeGraph datapointsTaskGraph = computeGraphBuilder.build();
    ExecutionPlan firstGraphExecutionPlan = taskExecutor.plan(datapointsTaskGraph);
    taskExecutor.execute(datapointsTaskGraph, firstGraphExecutionPlan);
    DataObject<Object> dataPointsObject = taskExecutor.getOutput(datapointsTaskGraph, firstGraphExecutionPlan, "datapointsink");
    LOG.info("Total Partitions : " + dataPointsObject.getPartitions().length);
    showAllUnits(dataPointsObject);
}
Also used : DataObjectSink(edu.iu.dsc.tws.task.dataobjects.DataObjectSink) ExecutionPlan(edu.iu.dsc.tws.api.compute.executor.ExecutionPlan) ComputeGraph(edu.iu.dsc.tws.api.compute.graph.ComputeGraph) ComputeGraphBuilder(edu.iu.dsc.tws.task.impl.ComputeGraphBuilder) DataObject(edu.iu.dsc.tws.api.dataset.DataObject) DataObjectSource(edu.iu.dsc.tws.task.dataobjects.DataObjectSource) ComputeConnection(edu.iu.dsc.tws.task.impl.ComputeConnection)
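
showAllUnits is not shown on this page; given the partition-consumer pattern used in these examples (getPartitions(), getConsumer(), next()), it plausibly walks the loaded data roughly as in this sketch (the method body and the hasNext() call are assumptions about the consumer API).

// Illustrative sketch of a showAllUnits-style helper: walk every partition
// of the DataObject and log each data unit it contains.
public void showAllUnits(DataObject<Object> dataPointsObject) {
    for (DataPartition<Object> partition : dataPointsObject.getPartitions()) {
        DataPartitionConsumer<Object> consumer = partition.getConsumer();
        while (consumer.hasNext()) {
            LOG.info("Data unit: " + consumer.next());
        }
    }
}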

Example 8 with ComputeGraph

use of edu.iu.dsc.tws.api.compute.graph.ComputeGraph in project twister2 by DSC-SPIDAL.

the class SvmSgdAdvancedRunner method executeTestingDataLoadingTaskGraph.

/**
 * This method loads the testing data.
 * The loaded test data is later used to evaluate the trained model.
 * The data is loaded in parallel according to the given parallelism parameter,
 * creating one partition per parallel task; these partitions are later used
 * to run the testing task graph in parallel.
 *
 * @return twister2 DataObject containing the testing data
 */
public DataObject<Object> executeTestingDataLoadingTaskGraph() {
    DataObject<Object> data = null;
    final String TEST_DATA_LOAD_EDGE_DIRECT = "direct2";
    DataObjectSource sourceTask1 = new DataObjectSource(TEST_DATA_LOAD_EDGE_DIRECT, this.svmJobParameters.getTestingDataDir());
    DataObjectSink sinkTask1 = new DataObjectSink();
    testingBuilder.addSource(Constants.SimpleGraphConfig.DATA_OBJECT_SOURCE_TESTING, sourceTask1, dataStreamerParallelism);
    ComputeConnection firstGraphComputeConnection1 = testingBuilder.addCompute(Constants.SimpleGraphConfig.DATA_OBJECT_SINK_TESTING, sinkTask1, dataStreamerParallelism);
    firstGraphComputeConnection1.direct(Constants.SimpleGraphConfig.DATA_OBJECT_SOURCE_TESTING).viaEdge(TEST_DATA_LOAD_EDGE_DIRECT).withDataType(MessageTypes.OBJECT);
    testingBuilder.setMode(OperationMode.BATCH);
    ComputeGraph datapointsTaskGraph1 = testingBuilder.build();
    datapointsTaskGraph1.setGraphName("testing-data-loading-graph");
    ExecutionPlan firstGraphExecutionPlan1 = taskExecutor.plan(datapointsTaskGraph1);
    taskExecutor.execute(datapointsTaskGraph1, firstGraphExecutionPlan1);
    data = taskExecutor.getOutput(datapointsTaskGraph1, firstGraphExecutionPlan1, Constants.SimpleGraphConfig.DATA_OBJECT_SINK_TESTING);
    if (data == null) {
        throw new NullPointerException("Something Went Wrong in Loading Testing Data");
    } else {
        LOG.info("Testing Data Total Partitions : " + data.getPartitions().length);
    }
    return data;
}
Also used : DataObjectSink(edu.iu.dsc.tws.task.dataobjects.DataObjectSink) ExecutionPlan(edu.iu.dsc.tws.api.compute.executor.ExecutionPlan) ComputeGraph(edu.iu.dsc.tws.api.compute.graph.ComputeGraph) DataObject(edu.iu.dsc.tws.api.dataset.DataObject) DataObjectSource(edu.iu.dsc.tws.task.dataobjects.DataObjectSource) ComputeConnection(edu.iu.dsc.tws.task.impl.ComputeConnection)
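
The DataObject returned here is typically handed to the tasks of a later graph through taskExecutor.addInput, which is exactly what Example 9 below does for the prediction source task. Condensed, the hand-off pattern looks like the following sketch (graph and key names are illustrative).

// Illustrative hand-off: load the test data, then expose it to the prediction
// graph's source task under a named input key before executing that graph.
DataObject<Object> testingData = executeTestingDataLoadingTaskGraph();
ExecutionPlan predictionPlan = taskExecutor.plan(predictionGraph);
taskExecutor.addInput(predictionGraph, predictionPlan,
    "prediction-source-task", "test-data", testingData);
taskExecutor.execute(predictionGraph, predictionPlan);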

Example 9 with ComputeGraph

use of edu.iu.dsc.tws.api.compute.graph.ComputeGraph in project twister2 by DSC-SPIDAL.

the class SvmSgdAdvancedRunner method executeTestingTaskGraph.

/**
 * This method executes the testing task graph, using the testing data loaded by the
 * data-loading task graph and the final weight vector obtained from the training task graph.
 * Testing also runs in parallel: the data was loaded with the given parallelism, one
 * prediction task runs per partition, and the per-partition results are then reduced.
 *
 * @return DataObject containing the accuracy value obtained
 */
public DataObject<Object> executeTestingTaskGraph() {
    DataObject<Object> data = null;
    predictionSourceTask = new PredictionSourceTask(svmJobParameters.isDummy(), this.binaryBatchModel, operationMode);
    predictionReduceTask = new PredictionReduceTask(operationMode);
    testingBuilder.addSource(Constants.SimpleGraphConfig.PREDICTION_SOURCE_TASK, predictionSourceTask, dataStreamerParallelism);
    ComputeConnection predictionReduceConnection = testingBuilder.addCompute(Constants.SimpleGraphConfig.PREDICTION_REDUCE_TASK, predictionReduceTask, reduceParallelism);
    predictionReduceConnection.reduce(Constants.SimpleGraphConfig.PREDICTION_SOURCE_TASK).viaEdge(Constants.SimpleGraphConfig.PREDICTION_EDGE).withReductionFunction(new PredictionAggregator()).withDataType(MessageTypes.OBJECT);
    testingBuilder.setMode(operationMode);
    ComputeGraph predictionGraph = testingBuilder.build();
    predictionGraph.setGraphName("testing-graph");
    ExecutionPlan predictionPlan = taskExecutor.plan(predictionGraph);
    // adding test data set
    taskExecutor.addInput(predictionGraph, predictionPlan, Constants.SimpleGraphConfig.PREDICTION_SOURCE_TASK, Constants.SimpleGraphConfig.TEST_DATA, testingData);
    // adding final weight vector
    taskExecutor.addInput(predictionGraph, predictionPlan, Constants.SimpleGraphConfig.PREDICTION_SOURCE_TASK, Constants.SimpleGraphConfig.FINAL_WEIGHT_VECTOR, trainedWeightVector);
    taskExecutor.execute(predictionGraph, predictionPlan);
    data = retrieveTestingAccuracyObject(predictionGraph, predictionPlan);
    return data;
}
Also used : ExecutionPlan(edu.iu.dsc.tws.api.compute.executor.ExecutionPlan) ComputeGraph(edu.iu.dsc.tws.api.compute.graph.ComputeGraph) PredictionReduceTask(edu.iu.dsc.tws.examples.ml.svm.test.PredictionReduceTask) DataObject(edu.iu.dsc.tws.api.dataset.DataObject) PredictionAggregator(edu.iu.dsc.tws.examples.ml.svm.test.PredictionAggregator) PredictionSourceTask(edu.iu.dsc.tws.examples.ml.svm.test.PredictionSourceTask) ComputeConnection(edu.iu.dsc.tws.task.impl.ComputeConnection)
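
retrieveTestingAccuracyObject is not shown on this page; following the taskExecutor.getOutput pattern of the earlier examples, it plausibly pulls the reduced result out of the prediction reduce task, roughly as in this sketch (the method body is an assumption).

// Illustrative sketch: fetch the reduced prediction output from the reduce
// task, mirroring the getOutput(...) calls in the earlier examples.
private DataObject<Object> retrieveTestingAccuracyObject(ComputeGraph predictionGraph,
                                                         ExecutionPlan predictionPlan) {
    return taskExecutor.getOutput(predictionGraph, predictionPlan,
        Constants.SimpleGraphConfig.PREDICTION_REDUCE_TASK);
}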

Example 10 with ComputeGraph

use of edu.iu.dsc.tws.api.compute.graph.ComputeGraph in project twister2 by DSC-SPIDAL.

the class ExecutionPlanBuilder method build.

@Override
public ExecutionPlan build(Config cfg, ComputeGraph taskGraph, TaskSchedulePlan taskSchedule) {
    // we need to build the task plan
    LogicalPlan logicalPlan = TaskPlanBuilder.build(workerId, workerInfoList, taskSchedule, taskIdGenerator);
    ParallelOperationFactory opFactory = new ParallelOperationFactory(cfg, network, logicalPlan);
    Map<Integer, WorkerSchedulePlan> containersMap = taskSchedule.getContainersMap();
    WorkerSchedulePlan conPlan = containersMap.get(workerId);
    if (conPlan == null) {
        LOG.log(Level.INFO, "Cannot find worker in the task plan: " + workerId);
        return null;
    }
    ExecutionPlan execution = new ExecutionPlan();
    Set<TaskInstancePlan> instancePlan = conPlan.getTaskInstances();
    long tasksVersion = 0L;
    if (CheckpointingContext.isCheckpointingEnabled(cfg)) {
        Set<Integer> globalTasks = Collections.emptySet();
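        // only worker 0 gathers the ids of every checkpointable task in the job
        // (skipping the internal CheckpointingSGatherSink tasks) to register the family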
        if (workerId == 0) {
            globalTasks = containersMap.values().stream()
                .flatMap(containerPlan -> containerPlan.getTaskInstances().stream())
                .filter(ip -> taskGraph.vertex(ip.getTaskName()).getTask() instanceof CheckpointableTask
                    && !(taskGraph.vertex(ip.getTaskName()).getTask() instanceof CheckpointingSGatherSink))
                .map(TaskInstancePlan::getTaskId)
                .collect(Collectors.toSet());
        }
        try {
            Checkpoint.FamilyInitializeResponse familyInitializeResponse = this.checkpointingClient.initFamily(workerId, containersMap.size(), taskGraph.getGraphName(), globalTasks);
            tasksVersion = familyInitializeResponse.getVersion();
        } catch (BlockingSendException e) {
            throw new RuntimeException("Failed to register tasks with Checkpoint Manager", e);
        }
        LOG.info("Tasks will start with version " + tasksVersion);
    }
    // for each task we are going to create the communications
    for (TaskInstancePlan ip : instancePlan) {
        Vertex v = taskGraph.vertex(ip.getTaskName());
        Map<String, Set<String>> inEdges = new HashMap<>();
        Map<String, String> outEdges = new HashMap<>();
        if (v == null) {
            throw new RuntimeException("Non-existing task scheduled: " + ip.getTaskName());
        }
        INode node = v.getTask();
        if (node instanceof ICompute || node instanceof ISource) {
            // lets get the communication
            Set<Edge> edges = taskGraph.outEdges(v);
            // now lets create the communication object
            for (Edge e : edges) {
                Vertex child = taskGraph.childOfTask(v, e.getName());
                // lets figure out the parents task id
                Set<Integer> srcTasks = taskIdGenerator.getTaskIds(v, ip.getTaskId());
                Set<Integer> tarTasks = taskIdGenerator.getTaskIds(child, getTaskIdOfTask(child.getName(), taskSchedule));
                Map<Integer, Integer> srcGlobalToIndex = taskIdGenerator.getGlobalTaskToIndex(v, ip.getTaskId());
                Map<Integer, Integer> tarGlobalToIndex = taskIdGenerator.getGlobalTaskToIndex(child, getTaskIdOfTask(child.getName(), taskSchedule));
                createCommunication(child, e, v, srcTasks, tarTasks, srcGlobalToIndex, tarGlobalToIndex);
                outEdges.put(e.getName(), child.getName());
            }
        }
        if (node instanceof ICompute) {
            // lets get the parent tasks
            Set<Edge> parentEdges = taskGraph.inEdges(v);
            for (Edge e : parentEdges) {
                Vertex parent = taskGraph.getParentOfTask(v, e.getName());
                // lets figure out the parents task id
                Set<Integer> srcTasks = taskIdGenerator.getTaskIds(parent, getTaskIdOfTask(parent.getName(), taskSchedule));
                Set<Integer> tarTasks = taskIdGenerator.getTaskIds(v, ip.getTaskId());
                Map<Integer, Integer> srcGlobalToIndex = taskIdGenerator.getGlobalTaskToIndex(parent, getTaskIdOfTask(parent.getName(), taskSchedule));
                Map<Integer, Integer> tarGlobalToIndex = taskIdGenerator.getGlobalTaskToIndex(v, ip.getTaskId());
                createCommunication(v, e, parent, srcTasks, tarTasks, srcGlobalToIndex, tarGlobalToIndex);
                // if we are a grouped edge, we have to use the group name
                String inEdge;
                if (e.getTargetEdge() == null) {
                    inEdge = e.getName();
                } else {
                    inEdge = e.getTargetEdge();
                }
                Set<String> parents = inEdges.get(inEdge);
                if (parents == null) {
                    parents = new HashSet<>();
                }
                parents.add(inEdge);
                inEdges.put(inEdge, parents);
            }
        }
        // lets create the instance
        INodeInstance iNodeInstance = createInstances(cfg, taskGraph.getGraphName(), ip, v, taskGraph.getOperationMode(), inEdges, outEdges, taskSchedule, tasksVersion);
        // add to execution
        execution.addNodes(v.getName(), taskIdGenerator.generateGlobalTaskId(ip.getTaskId(), ip.getTaskIndex()), iNodeInstance);
    }
    // now lets create the queues and start the execution
    for (Table.Cell<String, String, Communication> cell : parOpTable.cellSet()) {
        Communication c = cell.getValue();
        // lets create the communication
        OperationMode operationMode = taskGraph.getOperationMode();
        IParallelOperation op;
        assert c != null;
        c.build();
        if (c.getEdge().size() == 1) {
            op = opFactory.build(c.getEdge(0), c.getSourceTasks(), c.getTargetTasks(), operationMode, c.srcGlobalToIndex, c.tarGlobalToIndex);
        } else if (c.getEdge().size() > 1) {
            // just join op for now. Could change in the future
            // here the sources should be separated out for left and right edge
            Set<Integer> sourceTasks = c.getSourceTasks();
            Set<Integer> leftSources = new HashSet<>();
            Set<Integer> rightSources = new HashSet<>();
            if (!sourceTasks.isEmpty()) {
                // just to safely do .get() calls without isPresent()
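                // task ids appear to be allocated in blocks of TaskIdGenerator.TASK_OFFSET
                // per logical task; sources in the lowest block form the left side of the join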
                int minBin = (sourceTasks.stream().min(Integer::compareTo).get() / TaskIdGenerator.TASK_OFFSET) * TaskIdGenerator.TASK_OFFSET;
                for (Integer source : sourceTasks) {
                    if ((source / TaskIdGenerator.TASK_OFFSET) * TaskIdGenerator.TASK_OFFSET == minBin) {
                        leftSources.add(source);
                    } else {
                        rightSources.add(source);
                    }
                }
            }
            // now determine, which task is connected to which edge
            Edge leftEdge = c.getEdge(0);
            Edge rightEdge = c.getEdge(1);
            op = opFactory.build(leftEdge, rightEdge, leftSources, rightSources, c.getTargetTasks(), operationMode, c.srcGlobalToIndex, c.tarGlobalToIndex);
        } else {
            throw new RuntimeException("Cannot have communication with 0 edges");
        }
        // now lets check the sources and targets that are in this executor
        Set<Integer> sourcesOfThisWorker = intersectionOfTasks(conPlan, c.getSourceTasks());
        Set<Integer> targetsOfThisWorker = intersectionOfTasks(conPlan, c.getTargetTasks());
        // we use the target edge as the group name
        String targetEdge;
        if (c.getEdge().size() > 1) {
            targetEdge = c.getEdge(0).getTargetEdge();
        } else {
            targetEdge = c.getEdge(0).getName();
        }
        // so along with the operation mode, the windowing mode must be tested
        if (operationMode == OperationMode.STREAMING) {
            for (Integer i : sourcesOfThisWorker) {
                boolean found = false;
                // we can have multiple source tasks for an operation
                for (int sIndex = 0; sIndex < c.getSourceTask().size(); sIndex++) {
                    String sourceTask = c.getSourceTask().get(sIndex);
                    if (streamingTaskInstances.contains(sourceTask, i)) {
                        TaskStreamingInstance taskStreamingInstance = streamingTaskInstances.get(sourceTask, i);
                        taskStreamingInstance.registerOutParallelOperation(c.getEdge(sIndex).getName(), op);
                        op.registerSync(i, taskStreamingInstance);
                        found = true;
                    } else if (streamingSourceInstances.contains(sourceTask, i)) {
                        SourceStreamingInstance sourceStreamingInstance = streamingSourceInstances.get(sourceTask, i);
                        sourceStreamingInstance.registerOutParallelOperation(c.getEdge(sIndex).getName(), op);
                        found = true;
                    }
                }
                // only report a missing instance after every source task of the
                // operation has been checked (mirrors the batch branch below)
                if (!found) {
                    throw new RuntimeException("Not found: " + c.getSourceTask());
                }
            }
            // we only have one target task always
            for (Integer i : targetsOfThisWorker) {
                if (streamingTaskInstances.contains(c.getTargetTask(), i)) {
                    TaskStreamingInstance taskStreamingInstance = streamingTaskInstances.get(c.getTargetTask(), i);
                    op.register(i, taskStreamingInstance.getInQueue());
                    taskStreamingInstance.registerInParallelOperation(targetEdge, op);
                    op.registerSync(i, taskStreamingInstance);
                } else {
                    throw new RuntimeException("Not found: " + c.getTargetTask());
                }
            }
            execution.addOps(op);
        }
        if (operationMode == OperationMode.BATCH) {
            for (Integer i : sourcesOfThisWorker) {
                boolean found = false;
                // we can have multiple source tasks for an operation
                for (int sIndex = 0; sIndex < c.getSourceTask().size(); sIndex++) {
                    String sourceTask = c.getSourceTask().get(sIndex);
                    if (batchTaskInstances.contains(sourceTask, i)) {
                        TaskBatchInstance taskBatchInstance = batchTaskInstances.get(sourceTask, i);
                        taskBatchInstance.registerOutParallelOperation(c.getEdge(sIndex).getName(), op);
                        found = true;
                    } else if (batchSourceInstances.contains(sourceTask, i)) {
                        SourceBatchInstance sourceBatchInstance = batchSourceInstances.get(sourceTask, i);
                        sourceBatchInstance.registerOutParallelOperation(c.getEdge(sIndex).getName(), op);
                        found = true;
                    }
                }
                if (!found) {
                    throw new RuntimeException("Not found: " + c.getSourceTask());
                }
            }
            for (Integer i : targetsOfThisWorker) {
                if (batchTaskInstances.contains(c.getTargetTask(), i)) {
                    TaskBatchInstance taskBatchInstance = batchTaskInstances.get(c.getTargetTask(), i);
                    op.register(i, taskBatchInstance.getInQueue());
                    taskBatchInstance.registerInParallelOperation(targetEdge, op);
                    op.registerSync(i, taskBatchInstance);
                } else {
                    throw new RuntimeException("Not found: " + c.getTargetTask());
                }
            }
            execution.addOps(op);
        }
    }
    return execution;
}
Also used : Checkpoint(edu.iu.dsc.tws.proto.checkpoint.Checkpoint) ComputeGraph(edu.iu.dsc.tws.api.compute.graph.ComputeGraph) LogicalPlan(edu.iu.dsc.tws.api.comms.LogicalPlan) IParallelOperation(edu.iu.dsc.tws.api.compute.executor.IParallelOperation) INode(edu.iu.dsc.tws.api.compute.nodes.INode) HashMap(java.util.HashMap) HashBasedTable(com.google.common.collect.HashBasedTable) Config(edu.iu.dsc.tws.api.config.Config) INodeInstance(edu.iu.dsc.tws.api.compute.executor.INodeInstance) ArrayList(java.util.ArrayList) Level(java.util.logging.Level) HashSet(java.util.HashSet) WorkerSchedulePlan(edu.iu.dsc.tws.api.compute.schedule.elements.WorkerSchedulePlan) JobMasterAPI(edu.iu.dsc.tws.proto.jobmaster.JobMasterAPI) ExecutionPlan(edu.iu.dsc.tws.api.compute.executor.ExecutionPlan) TaskSchedulePlan(edu.iu.dsc.tws.api.compute.schedule.elements.TaskSchedulePlan) Map(java.util.Map) CheckpointableTask(edu.iu.dsc.tws.checkpointing.task.CheckpointableTask) TaskInstancePlan(edu.iu.dsc.tws.api.compute.schedule.elements.TaskInstancePlan) BlockingSendException(edu.iu.dsc.tws.api.exceptions.net.BlockingSendException) ISource(edu.iu.dsc.tws.api.compute.nodes.ISource) TaskStreamingInstance(edu.iu.dsc.tws.executor.core.streaming.TaskStreamingInstance) TaskBatchInstance(edu.iu.dsc.tws.executor.core.batch.TaskBatchInstance) CheckpointingContext(edu.iu.dsc.tws.checkpointing.util.CheckpointingContext) SourceStreamingInstance(edu.iu.dsc.tws.executor.core.streaming.SourceStreamingInstance) IExecutionPlanBuilder(edu.iu.dsc.tws.api.compute.executor.IExecutionPlanBuilder) ICompute(edu.iu.dsc.tws.api.compute.nodes.ICompute) Set(java.util.Set) CheckpointingSGatherSink(edu.iu.dsc.tws.checkpointing.task.CheckpointingSGatherSink) Logger(java.util.logging.Logger) LinkedBlockingQueue(java.util.concurrent.LinkedBlockingQueue) Collectors(java.util.stream.Collectors) SourceBatchInstance(edu.iu.dsc.tws.executor.core.batch.SourceBatchInstance) Vertex(edu.iu.dsc.tws.api.compute.graph.Vertex) ArrayBlockingQueue(java.util.concurrent.ArrayBlockingQueue) Communicator(edu.iu.dsc.tws.api.comms.Communicator) List(java.util.List) CheckpointingClient(edu.iu.dsc.tws.api.checkpointing.CheckpointingClient) OperationMode(edu.iu.dsc.tws.api.compute.graph.OperationMode) Utils(edu.iu.dsc.tws.executor.util.Utils) ExecutorContext(edu.iu.dsc.tws.api.compute.executor.ExecutorContext) Edge(edu.iu.dsc.tws.api.compute.graph.Edge) Collections(java.util.Collections) Table(com.google.common.collect.Table)

Aggregations

ComputeGraph (edu.iu.dsc.tws.api.compute.graph.ComputeGraph): 89
ComputeConnection (edu.iu.dsc.tws.task.impl.ComputeConnection): 40
ComputeGraphBuilder (edu.iu.dsc.tws.task.impl.ComputeGraphBuilder): 39
TaskSchedulerClassTest (edu.iu.dsc.tws.tsched.utils.TaskSchedulerClassTest): 38
ExecutionPlan (edu.iu.dsc.tws.api.compute.executor.ExecutionPlan): 32
TaskSchedulePlan (edu.iu.dsc.tws.api.compute.schedule.elements.TaskSchedulePlan): 26
WorkerPlan (edu.iu.dsc.tws.api.compute.schedule.elements.WorkerPlan): 25
Test (org.junit.Test): 25
WorkerSchedulePlan (edu.iu.dsc.tws.api.compute.schedule.elements.WorkerSchedulePlan): 22
Map (java.util.Map): 22
TaskInstancePlan (edu.iu.dsc.tws.api.compute.schedule.elements.TaskInstancePlan): 20
Config (edu.iu.dsc.tws.api.config.Config): 18
DataObject (edu.iu.dsc.tws.api.dataset.DataObject): 9
ComputeEnvironment (edu.iu.dsc.tws.task.ComputeEnvironment): 9
DataFlowGraph (edu.iu.dsc.tws.task.cdfw.DataFlowGraph): 8
IExecutor (edu.iu.dsc.tws.api.compute.executor.IExecutor): 7
JobConfig (edu.iu.dsc.tws.api.JobConfig): 5
DataObjectSource (edu.iu.dsc.tws.task.dataobjects.DataObjectSource): 5
HashMap (java.util.HashMap): 5
Path (edu.iu.dsc.tws.api.data.Path): 4