
Example 1 with IController

Use of edu.iu.dsc.tws.api.scheduler.IController in project twister2 by DSC-SPIDAL.

From the class MPILauncher, the method launch:

@Override
public Twister2JobState launch(JobAPI.Job job) {
    LOG.log(Level.INFO, "Launching job for cluster {0}", MPIContext.clusterType(config));
    Twister2JobState state = new Twister2JobState(false);
    if (!configsOK()) {
        return state;
    }
    // distribute the job bundle if not running on a shared file system
    if (!MPIContext.isSharedFs(config)) {
        LOG.info("Configured as NON SHARED file system. " + "Running bootstrap procedure to distribute files...");
        try {
            this.distributeJobFiles(job);
        } catch (IOException e) {
            LOG.log(Level.SEVERE, "Error in distributing job files", e);
            throw new RuntimeException("Error in distributing job files");
        }
    } else {
        LOG.info("Configured as SHARED file system. " + "Skipping bootstrap procedure & setting up working directory");
        if (!setupWorkingDirectory(job.getJobId())) {
            throw new RuntimeException("Failed to setup the directory");
        }
    }
    config = Config.newBuilder().putAll(config).put(SchedulerContext.WORKING_DIRECTORY, jobWorkingDirectory).build();
    JobMaster jobMaster = null;
    Thread jmThread = null;
    if (JobMasterContext.isJobMasterUsed(config) && JobMasterContext.jobMasterRunsInClient(config)) {
        // Since the job master is running on the client, we can collect job information
        state.setDetached(false);
        try {
            int port = NetworkUtils.getFreePort();
            String hostAddress = JobMasterContext.jobMasterIP(config);
            if (hostAddress == null) {
                hostAddress = ResourceSchedulerUtils.getHostIP(config);
            }
            // add the port and IP to the config
            config = Config.newBuilder().putAll(config).put("__job_master_port__", port).put("__job_master_ip__", hostAddress).build();
            LOG.log(Level.INFO, String.format("Starting the job master: %s:%d", hostAddress, port));
            JobMasterAPI.NodeInfo jobMasterNodeInfo = NodeInfoUtils.createNodeInfo(hostAddress, "default", "default");
            IScalerPerCluster nullScaler = new NullScaler();
            JobMasterAPI.JobMasterState initialState = JobMasterAPI.JobMasterState.JM_STARTED;
            NullTerminator nt = new NullTerminator();
            jobMaster = new JobMaster(config, "0.0.0.0", port, nt, job, jobMasterNodeInfo, nullScaler, initialState);
            jobMaster.addShutdownHook(true);
            jmThread = jobMaster.startJobMasterThreaded();
        } catch (Twister2Exception e) {
            LOG.log(Level.SEVERE, "Exception when starting Job master: ", e);
            throw new RuntimeException(e);
        }
    }
    final boolean[] start = { false };
    // now start the controller, which will get the resources and start the job
    Thread controllerThread = new Thread(() -> {
        IController controller = new MPIController(true);
        controller.initialize(config);
        start[0] = controller.start(job);
    });
    controllerThread.setName("MPIController");
    controllerThread.start();
    // wait until the controller finishes
    try {
        controllerThread.join();
    } catch (InterruptedException ignore) {
    }
    // now let's wait on the job master running in the client
    if (jmThread != null && JobMasterContext.isJobMasterUsed(config) && JobMasterContext.jobMasterRunsInClient(config)) {
        try {
            jmThread.join();
        } catch (InterruptedException ignore) {
        }
    }
    if (jobMaster != null && jobMaster.getDriver() != null) {
        if (jobMaster.getDriver().getState() != DriverJobState.FAILED) {
            state.setJobstate(DriverJobState.COMPLETED);
        } else {
            state.setJobstate(jobMaster.getDriver().getState());
        }
        state.setFinalMessages(jobMaster.getDriver().getMessages());
    }
    state.setRequestGranted(start[0]);
    return state;
}
Also used : JobMaster(edu.iu.dsc.tws.master.server.JobMaster) Twister2Exception(edu.iu.dsc.tws.api.exceptions.Twister2Exception) IController(edu.iu.dsc.tws.api.scheduler.IController) NullScaler(edu.iu.dsc.tws.api.driver.NullScaler) IOException(java.io.IOException) IScalerPerCluster(edu.iu.dsc.tws.api.driver.IScalerPerCluster) JobMasterAPI(edu.iu.dsc.tws.proto.jobmaster.JobMasterAPI) Twister2JobState(edu.iu.dsc.tws.api.scheduler.Twister2JobState) NullTerminator(edu.iu.dsc.tws.rsched.schedulers.NullTerminator)
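
The core of launch() is the controller lifecycle: create an IController, initialize it with the config, then call start(job) on a dedicated thread and join it. Below is a minimal sketch of that pattern, using only the calls shown above; ControllerRunner and runBlocking are hypothetical names, not part of twister2.

// Minimal sketch of the controller lifecycle used in launch():
// initialize the controller, then start it on a dedicated thread and
// block until it finishes. ControllerRunner and runBlocking are
// hypothetical names, not part of twister2.
import edu.iu.dsc.tws.api.config.Config;
import edu.iu.dsc.tws.api.scheduler.IController;
import edu.iu.dsc.tws.proto.system.job.JobAPI;

public final class ControllerRunner {

    private ControllerRunner() {
    }

    public static boolean runBlocking(IController controller, Config config, JobAPI.Job job) {
        // a one-element array lets the lambda hand the result back, as in launch()
        final boolean[] granted = { false };
        Thread t = new Thread(() -> {
            controller.initialize(config);
            granted[0] = controller.start(job);
        });
        t.setName("Controller");
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            // restore the interrupt flag instead of swallowing it
            Thread.currentThread().interrupt();
        }
        return granted[0];
    }
}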

Example 2 with IController

Use of edu.iu.dsc.tws.api.scheduler.IController in project twister2 by DSC-SPIDAL.

From the class NomadLauncher, the method killJob:

@Override
public boolean killJob(String jobID) {
    LOG.log(Level.INFO, "Terminating job for cluster: ", NomadContext.clusterType(config));
    // get the job working directory
    String jobWorkingDirectory = NomadContext.workingDirectory(config);
    Config newConfig = Config.newBuilder().putAll(config).put(SchedulerContext.WORKING_DIRECTORY, jobWorkingDirectory).build();
    // now create the controller, which will talk to Nomad to kill the job
    IController controller = new NomadController(true);
    controller.initialize(newConfig);
    jobWorkingDirectory = Paths.get(jobWorkingDirectory, jobID).toAbsolutePath().toString();
    String jobDescFile = JobUtils.getJobDescriptionFilePath(jobWorkingDirectory, jobID, config);
    JobAPI.Job job = JobUtils.readJobFile(jobDescFile);
    return controller.kill(job);
}
Also used : IController(edu.iu.dsc.tws.api.scheduler.IController) Config(edu.iu.dsc.tws.api.config.Config) JobAPI(edu.iu.dsc.tws.proto.system.job.JobAPI)
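
killJob() assumes the job description file is present in the working directory. Here is a hedged variant of the same path with an existence check added; killJobSafely is a hypothetical helper, the Files.exists guard and the null check on the parsed job are additions not present in the original, and the import paths for NomadContext and JobUtils are assumed from the twister2 source layout.

// Defensive variant of the kill path above: verify that the job
// description file exists before deserializing it.
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.logging.Logger;
import edu.iu.dsc.tws.api.config.Config;
import edu.iu.dsc.tws.api.scheduler.IController;
import edu.iu.dsc.tws.proto.system.job.JobAPI;
// assumed import paths, following the twister2 source layout
import edu.iu.dsc.tws.rsched.schedulers.nomad.NomadContext;
import edu.iu.dsc.tws.rsched.schedulers.nomad.NomadController;
import edu.iu.dsc.tws.rsched.utils.JobUtils;

public final class SafeJobKiller {

    private static final Logger LOG = Logger.getLogger(SafeJobKiller.class.getName());

    private SafeJobKiller() {
    }

    public static boolean killJobSafely(String jobID, Config config) {
        // resolve the per-job working directory, as killJob() does
        String workingDir = Paths.get(NomadContext.workingDirectory(config), jobID)
            .toAbsolutePath().toString();
        String jobDescFile = JobUtils.getJobDescriptionFilePath(workingDir, jobID, config);
        if (!Files.exists(Paths.get(jobDescFile))) {
            LOG.warning("Job description file not found: " + jobDescFile);
            return false;
        }
        IController controller = new NomadController(true);
        controller.initialize(config);
        JobAPI.Job job = JobUtils.readJobFile(jobDescFile);
        // added precaution: readJobFile may fail to parse the file
        return job != null && controller.kill(job);
    }
}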

Example 3 with IController

Use of edu.iu.dsc.tws.api.scheduler.IController in project twister2 by DSC-SPIDAL.

From the class NomadMasterStarter, the method launch:

/**
 * launch the job master
 *
 * @return false if setup fails
 */
public boolean launch() {
    // get the job working directory
    String jobWorkingDirectory = NomadContext.workingDirectory(config);
    LOG.log(Level.INFO, "job working directory ....." + jobWorkingDirectory);
    if (NomadContext.sharedFileSystem(config)) {
        if (!setupWorkingDirectory(job, jobWorkingDirectory)) {
            throw new RuntimeException("Failed to setup the directory");
        }
    }
    Config newConfig = Config.newBuilder().putAll(config).put(SchedulerContext.WORKING_DIRECTORY, jobWorkingDirectory).build();
    // now start the controller, which will get the resources from
    // Nomad and start the job
    IController controller = new NomadController(true);
    controller.initialize(newConfig);
    // start the Job Master locally
    JobMaster jobMaster = null;
    Thread jmThread = null;
    if (JobMasterContext.jobMasterRunsInClient(config)) {
        try {
            int port = JobMasterContext.jobMasterPort(config);
            String hostAddress = JobMasterContext.jobMasterIP(config);
            if (hostAddress == null) {
                hostAddress = InetAddress.getLocalHost().getHostAddress();
            }
            LOG.log(Level.INFO, String.format("Starting the job manager: %s:%d", hostAddress, port));
            JobMasterAPI.NodeInfo jobMasterNodeInfo = NomadContext.getNodeInfo(config, hostAddress);
            IScalerPerCluster clusterScaler = new NullScaler();
            JobMasterAPI.JobMasterState initialState = JobMasterAPI.JobMasterState.JM_STARTED;
            NullTerminator nt = new NullTerminator();
            jobMaster = new JobMaster(config, hostAddress, nt, job, jobMasterNodeInfo, clusterScaler, initialState);
            jobMaster.addShutdownHook(true);
            jmThread = jobMaster.startJobMasterThreaded();
        } catch (UnknownHostException e) {
            LOG.log(Level.SEVERE, "Exception when getting local host address: ", e);
            throw new RuntimeException(e);
        } catch (Twister2Exception e) {
            LOG.log(Level.SEVERE, "Exception when starting Job master: ", e);
            throw new RuntimeException(e);
        }
    }
    boolean start = controller.start(job);
    // now let's wait on the job master running in the client
    if (JobMasterContext.jobMasterRunsInClient(config)) {
        try {
            if (jmThread != null) {
                jmThread.join();
            }
        } catch (InterruptedException ignore) {
        }
    }
    return start;
}
Also used : JobMaster(edu.iu.dsc.tws.master.server.JobMaster) Twister2Exception(edu.iu.dsc.tws.api.exceptions.Twister2Exception) IController(edu.iu.dsc.tws.api.scheduler.IController) UnknownHostException(java.net.UnknownHostException) NullScaler(edu.iu.dsc.tws.api.driver.NullScaler) Config(edu.iu.dsc.tws.api.config.Config) IScalerPerCluster(edu.iu.dsc.tws.api.driver.IScalerPerCluster) NomadController(edu.iu.dsc.tws.rsched.schedulers.nomad.NomadController) JobMasterAPI(edu.iu.dsc.tws.proto.jobmaster.JobMasterAPI) NullTerminator(edu.iu.dsc.tws.rsched.schedulers.NullTerminator)
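
Both launch() methods resolve the job master host the same way: take the configured IP if one is set, otherwise fall back to a discovered local address (ResourceSchedulerUtils.getHostIP in the MPI launcher, InetAddress.getLocalHost() here). Below is a small helper sketch of that pattern; JobMasterAddress and resolveJobMasterHost are hypothetical names, and the JobMasterContext import path is assumed.

// Sketch of the host-resolution pattern shared by the launchers above.
import java.net.InetAddress;
import java.net.UnknownHostException;
import edu.iu.dsc.tws.api.config.Config;
// assumed import path, following the twister2 source layout
import edu.iu.dsc.tws.master.JobMasterContext;

public final class JobMasterAddress {

    private JobMasterAddress() {
    }

    public static String resolveJobMasterHost(Config config) throws UnknownHostException {
        // prefer the explicitly configured job master IP
        String host = JobMasterContext.jobMasterIP(config);
        if (host == null) {
            // fall back to the local host address, as this example does
            host = InetAddress.getLocalHost().getHostAddress();
        }
        return host;
    }
}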

Aggregations

IController (edu.iu.dsc.tws.api.scheduler.IController): 3 uses
Config (edu.iu.dsc.tws.api.config.Config): 2 uses
IScalerPerCluster (edu.iu.dsc.tws.api.driver.IScalerPerCluster): 2 uses
NullScaler (edu.iu.dsc.tws.api.driver.NullScaler): 2 uses
Twister2Exception (edu.iu.dsc.tws.api.exceptions.Twister2Exception): 2 uses
JobMaster (edu.iu.dsc.tws.master.server.JobMaster): 2 uses
JobMasterAPI (edu.iu.dsc.tws.proto.jobmaster.JobMasterAPI): 2 uses
NullTerminator (edu.iu.dsc.tws.rsched.schedulers.NullTerminator): 2 uses
Twister2JobState (edu.iu.dsc.tws.api.scheduler.Twister2JobState): 1 use
JobAPI (edu.iu.dsc.tws.proto.system.job.JobAPI): 1 use
NomadController (edu.iu.dsc.tws.rsched.schedulers.nomad.NomadController): 1 use
IOException (java.io.IOException): 1 use
UnknownHostException (java.net.UnknownHostException): 1 use