Search in sources:

Example 1 with JobCleanupWork

Use of org.apache.hyracks.control.cc.work.JobCleanupWork in project asterixdb by apache.

From the class JobExecutor, the method abortJob:

private void abortJob(List<Exception> exceptions) {
    // Iterate over a copy: aborting a task cluster removes it from inProgressTaskClusters,
    // which the assert below relies on.
    Set<TaskCluster> inProgressTaskClustersCopy = new HashSet<>(inProgressTaskClusters);
    for (TaskCluster tc : inProgressTaskClustersCopy) {
        abortTaskCluster(findLastTaskClusterAttempt(tc), TaskClusterAttempt.TaskClusterStatus.ABORTED);
    }
    assert inProgressTaskClusters.isEmpty();
    // Hand final cleanup off to the CC work queue with FAILURE status and the collected exceptions.
    ccs.getWorkQueue().schedule(
            new JobCleanupWork(ccs.getJobManager(), jobRun.getJobId(), JobStatus.FAILURE, exceptions));
}
Also used: JobCleanupWork (org.apache.hyracks.control.cc.work.JobCleanupWork), TaskCluster (org.apache.hyracks.control.cc.job.TaskCluster), HashSet (java.util.HashSet)
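
Both examples follow the same pattern: construct a JobCleanupWork and hand it to the cluster controller's work queue so the job manager can finalize the job asynchronously. Below is a minimal sketch of that pattern, assuming the same ccs (ClusterControllerService) and jobRun fields that JobExecutor holds; the class name JobCleanupScheduling and the helper scheduleJobCleanup are hypothetical and only for illustration.

import java.util.List;

import org.apache.hyracks.api.job.JobStatus;
import org.apache.hyracks.control.cc.ClusterControllerService;
import org.apache.hyracks.control.cc.job.JobRun;
import org.apache.hyracks.control.cc.work.JobCleanupWork;

// Hypothetical illustration class; not part of Hyracks.
final class JobCleanupScheduling {
    private final ClusterControllerService ccs;
    private final JobRun jobRun;

    JobCleanupScheduling(ClusterControllerService ccs, JobRun jobRun) {
        this.ccs = ccs;
        this.jobRun = jobRun;
    }

    // Enqueue a JobCleanupWork so the job manager finalizes the job with the given status.
    // `exceptions` carries the failure causes, or is null for a normal termination.
    void scheduleJobCleanup(JobStatus status, List<Exception> exceptions) {
        ccs.getWorkQueue().schedule(
                new JobCleanupWork(ccs.getJobManager(), jobRun.getJobId(), status, exceptions));
    }
}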

Example 2 with JobCleanupWork

Use of org.apache.hyracks.control.cc.work.JobCleanupWork in project asterixdb by apache.

From the class JobExecutor, the method startRunnableActivityClusters:

private void startRunnableActivityClusters() throws HyracksException {
    Set<TaskCluster> taskClusterRoots = new HashSet<>();
    findRunnableTaskClusterRoots(taskClusterRoots, jobRun.getActivityClusterGraph().getActivityClusterMap().values());
    if (LOGGER.isLoggable(Level.FINE)) {
        LOGGER.fine("Runnable TC roots: " + taskClusterRoots + ", inProgressTaskClusters: " + inProgressTaskClusters);
    }
    if (taskClusterRoots.isEmpty() && inProgressTaskClusters.isEmpty()) {
        // Nothing left to run and nothing in flight: the job is complete,
        // so schedule cleanup with TERMINATED status and no exceptions.
        ccs.getWorkQueue().schedule(
                new JobCleanupWork(ccs.getJobManager(), jobRun.getJobId(), JobStatus.TERMINATED, null));
        return;
    }
    startRunnableTaskClusters(taskClusterRoots);
}
Also used: JobCleanupWork (org.apache.hyracks.control.cc.work.JobCleanupWork), TaskCluster (org.apache.hyracks.control.cc.job.TaskCluster), HashSet (java.util.HashSet)
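
In contrast to Example 1, this path reports a normal completion: when there are no runnable task cluster roots and nothing is in progress, the cleanup work is scheduled with JobStatus.TERMINATED and a null exception list rather than JobStatus.FAILURE with the collected exceptions.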

Aggregations

- HashSet (java.util.HashSet): 2 usages
- TaskCluster (org.apache.hyracks.control.cc.job.TaskCluster): 2 usages
- JobCleanupWork (org.apache.hyracks.control.cc.work.JobCleanupWork): 2 usages