Example 1 with IPersistentVolume

Use of edu.iu.dsc.tws.api.resource.IPersistentVolume in the twister2 project by DSC-SPIDAL.

From the class HadoopTSet, method execute:

@Override
public void execute(Config config, JobAPI.Job job, IWorkerController workerController,
                    IPersistentVolume persistentVolume, IVolatileVolume volatileVolume) {
    int workerId = workerController.getWorkerInfo().getWorkerID();
    WorkerEnvironment workerEnv = WorkerEnvironment.init(config, job, workerController, persistentVolume, volatileVolume);
    BatchEnvironment tSetEnv = TSetEnvironment.initBatch(workerEnv);
    Configuration configuration = new Configuration();
    configuration.addResource(new Path(HdfsDataContext.getHdfsConfigDirectory(config)));
    configuration.set(TextInputFormat.INPUT_DIR, "/input4");
    SourceTSet<String> source = tSetEnv.createHadoopSource(configuration, TextInputFormat.class, 4,
            new MapFunc<Tuple<LongWritable, Text>, String>() {

        @Override
        public String map(Tuple<LongWritable, Text> input) {
            return input.getKey().toString() + " : " + input.getValue().toString();
        }
    });
    SinkTSet<Iterator<String>> sink = source.direct().sink((SinkFunc<Iterator<String>>) value -> {
        while (value.hasNext()) {
            String next = value.next();
            LOG.info("Received value: " + next);
        }
        return true;
    });
    tSetEnv.run(sink);
}
Also used : Path(org.apache.hadoop.fs.Path) Twister2Job(edu.iu.dsc.tws.api.Twister2Job) HdfsDataContext(edu.iu.dsc.tws.data.utils.HdfsDataContext) ResourceAllocator(edu.iu.dsc.tws.rsched.core.ResourceAllocator) BatchEnvironment(edu.iu.dsc.tws.tset.env.BatchEnvironment) Text(org.apache.hadoop.io.Text) IPersistentVolume(edu.iu.dsc.tws.api.resource.IPersistentVolume) HashMap(java.util.HashMap) Config(edu.iu.dsc.tws.api.config.Config) MapFunc(edu.iu.dsc.tws.api.tset.fn.MapFunc) LongWritable(org.apache.hadoop.io.LongWritable) JobConfig(edu.iu.dsc.tws.api.JobConfig) TextInputFormat(org.apache.hadoop.mapreduce.lib.input.TextInputFormat) Configuration(org.apache.hadoop.conf.Configuration) Tuple(edu.iu.dsc.tws.api.comms.structs.Tuple) Iterator(java.util.Iterator) IVolatileVolume(edu.iu.dsc.tws.api.resource.IVolatileVolume) SourceTSet(edu.iu.dsc.tws.tset.sets.batch.SourceTSet) SinkTSet(edu.iu.dsc.tws.tset.sets.batch.SinkTSet) JobAPI(edu.iu.dsc.tws.proto.system.job.JobAPI) Logger(java.util.logging.Logger) SinkFunc(edu.iu.dsc.tws.api.tset.fn.SinkFunc) Serializable(java.io.Serializable) Twister2Submitter(edu.iu.dsc.tws.rsched.job.Twister2Submitter) IWorker(edu.iu.dsc.tws.api.resource.IWorker) WorkerEnvironment(edu.iu.dsc.tws.api.resource.WorkerEnvironment) IWorkerController(edu.iu.dsc.tws.api.resource.IWorkerController) TSetEnvironment(edu.iu.dsc.tws.tset.env.TSetEnvironment)
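The pipeline above maps each Hadoop record (byte offset, line of text) to the string "offset : line" and drains the result in a sink. The per-record logic can be sketched in plain Java, using a simplified local Tuple class as a stand-in for twister2's edu.iu.dsc.tws.api.comms.structs.Tuple (the class, records, and values here are illustrative, not the real API):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

public class HadoopSourceMapDemo {
    // Simplified stand-in for twister2's Tuple; the real class lives in
    // edu.iu.dsc.tws.api.comms.structs and is used with LongWritable/Text.
    static final class Tuple<K, V> {
        private final K key;
        private final V value;
        Tuple(K key, V value) { this.key = key; this.value = value; }
        K getKey() { return key; }
        V getValue() { return value; }
    }

    public static void main(String[] args) {
        // Same transformation as the MapFunc in the example: "<offset> : <line>"
        Function<Tuple<Long, String>, String> map =
                t -> t.getKey().toString() + " : " + t.getValue();

        // Hypothetical records shaped like TextInputFormat output: (byte offset, line)
        List<Tuple<Long, String>> records = Arrays.asList(
                new Tuple<>(0L, "first line"),
                new Tuple<>(11L, "second line"));

        // The sink side: drain the iterator and log each mapped value
        Iterator<String> it = records.stream().map(map).iterator();
        while (it.hasNext()) {
            System.out.println("Received value: " + it.next());
        }
    }
}
```

With TextInputFormat the key is the byte offset of the line in the file, which is why the mapped strings start with 0, 11, and so on.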

Example 2 with IPersistentVolume

Use of edu.iu.dsc.tws.api.resource.IPersistentVolume in the twister2 project by DSC-SPIDAL.

From the class MPIWorkerStarter, method startWorker:

/**
 * Start the worker
 *
 * @param intracomm the MPI intra-communicator for the worker group
 */
private void startWorker(Intracomm intracomm) {
    try {
        // initialize the logger
        initWorkerLogger(config, intracomm.getRank());
        // now create the worker
        IWorkerController wc = WorkerRuntime.getWorkerController();
        IPersistentVolume persistentVolume = initPersistenceVolume(config, globalRank);
        MPIContext.addRuntimeObject("comm", intracomm);
        IWorker worker = JobUtils.initializeIWorker(job);
        MPIWorkerManager workerManager = new MPIWorkerManager();
        workerManager.execute(config, job, wc, persistentVolume, null, worker);
    } catch (MPIException e) {
        LOG.log(Level.SEVERE, "Failed to synchronize the workers at the start");
        throw new RuntimeException(e);
    }
}
Also used : IPersistentVolume(edu.iu.dsc.tws.api.resource.IPersistentVolume) MPIException(mpi.MPIException) Twister2RuntimeException(edu.iu.dsc.tws.api.exceptions.Twister2RuntimeException) MPIWorkerManager(edu.iu.dsc.tws.rsched.worker.MPIWorkerManager) IWorkerController(edu.iu.dsc.tws.api.resource.IWorkerController) IWorker(edu.iu.dsc.tws.api.resource.IWorker)

Aggregations

IPersistentVolume (edu.iu.dsc.tws.api.resource.IPersistentVolume) 2
IWorker (edu.iu.dsc.tws.api.resource.IWorker) 2
IWorkerController (edu.iu.dsc.tws.api.resource.IWorkerController) 2
JobConfig (edu.iu.dsc.tws.api.JobConfig) 1
Twister2Job (edu.iu.dsc.tws.api.Twister2Job) 1
Tuple (edu.iu.dsc.tws.api.comms.structs.Tuple) 1
Config (edu.iu.dsc.tws.api.config.Config) 1
Twister2RuntimeException (edu.iu.dsc.tws.api.exceptions.Twister2RuntimeException) 1
IVolatileVolume (edu.iu.dsc.tws.api.resource.IVolatileVolume) 1
WorkerEnvironment (edu.iu.dsc.tws.api.resource.WorkerEnvironment) 1
MapFunc (edu.iu.dsc.tws.api.tset.fn.MapFunc) 1
SinkFunc (edu.iu.dsc.tws.api.tset.fn.SinkFunc) 1
HdfsDataContext (edu.iu.dsc.tws.data.utils.HdfsDataContext) 1
JobAPI (edu.iu.dsc.tws.proto.system.job.JobAPI) 1
ResourceAllocator (edu.iu.dsc.tws.rsched.core.ResourceAllocator) 1
Twister2Submitter (edu.iu.dsc.tws.rsched.job.Twister2Submitter) 1
MPIWorkerManager (edu.iu.dsc.tws.rsched.worker.MPIWorkerManager) 1
BatchEnvironment (edu.iu.dsc.tws.tset.env.BatchEnvironment) 1
TSetEnvironment (edu.iu.dsc.tws.tset.env.TSetEnvironment) 1
SinkTSet (edu.iu.dsc.tws.tset.sets.batch.SinkTSet) 1