
Example 1 with ActionScheduler

Use of org.smartdata.model.action.ActionScheduler in project SSM by Intel-bigdata.

From class TestCompressDecompress, method testDecompress:

@Test
public void testDecompress() throws Exception {
    int arraySize = 1024 * 1024 * 8;
    String filePath = "/ssm/compression/file4";
    prepareFile(filePath, arraySize);
    dfsClient.setStoragePolicy(filePath, "COLD");
    HdfsFileStatus fileStatusBefore = dfsClient.getFileInfo(filePath);
    CmdletManager cmdletManager = ssm.getCmdletManager();
    // Expect that a common file cannot be decompressed.
    List<ActionScheduler> schedulers = cmdletManager.getSchedulers("decompress");
    Assert.assertEquals(1, schedulers.size());
    ActionScheduler scheduler = schedulers.get(0);
    Assert.assertTrue(scheduler instanceof CompressionScheduler);
    Assert.assertFalse(((CompressionScheduler) scheduler).supportDecompression(filePath));
    // Compress the given file
    long cmdId = cmdletManager.submitCmdlet("compress -file " + filePath + " -codec " + codec);
    waitTillActionDone(cmdId);
    FileState fileState = HadoopUtil.getFileState(dfsClient, filePath);
    Assert.assertTrue(fileState instanceof CompressionFileState);
    // The storage policy should not be changed
    HdfsFileStatus fileStatusAfterCompress = dfsClient.getFileInfo(filePath);
    if (fileStatusBefore.getStoragePolicy() != 0) {
        // Ensure the storage policy remains consistent
        Assert.assertEquals(fileStatusBefore.getStoragePolicy(), fileStatusAfterCompress.getStoragePolicy());
    }
    // Try to decompress a compressed file
    cmdId = cmdletManager.submitCmdlet("decompress -file " + filePath);
    waitTillActionDone(cmdId);
    fileState = HadoopUtil.getFileState(dfsClient, filePath);
    Assert.assertFalse(fileState instanceof CompressionFileState);
    // The storage policy should not be changed.
    HdfsFileStatus fileStatusAfterDeCompress = dfsClient.getFileInfo(filePath);
    if (fileStatusBefore.getStoragePolicy() != 0) {
        // Ensure the storage policy remains consistent
        Assert.assertEquals(fileStatusBefore.getStoragePolicy(), fileStatusAfterDeCompress.getStoragePolicy());
    }
}
Also used : FileState(org.smartdata.model.FileState) CompressionFileState(org.smartdata.model.CompressionFileState) CmdletManager(org.smartdata.server.engine.CmdletManager) HdfsFileStatus(org.apache.hadoop.hdfs.protocol.HdfsFileStatus) ActionScheduler(org.smartdata.model.action.ActionScheduler) CompressionScheduler(org.smartdata.hdfs.scheduler.CompressionScheduler) Test(org.junit.Test)
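The lookup the test relies on (getSchedulers returning every scheduler registered for an action name) can be sketched as a small registry. This is an illustrative stand-in, not SSM's actual CmdletManager: SchedulerRegistry and its methods are made-up names, and plain strings stand in for ActionScheduler instances.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SchedulerRegistry {
    // Maps an action name ("compress", "decompress", ...) to its schedulers.
    private final Map<String, List<String>> schedulers = new HashMap<>();

    // Register one scheduler under every action name it handles, so a single
    // scheduler (e.g. CompressionScheduler) can serve several actions.
    public void register(String schedulerName, String... actionNames) {
        for (String action : actionNames) {
            schedulers.computeIfAbsent(action, k -> new ArrayList<>()).add(schedulerName);
        }
    }

    // Mirrors the getSchedulers(actionName) lookup: an empty list when no
    // scheduler is registered for the action.
    public List<String> getSchedulers(String actionName) {
        return schedulers.getOrDefault(actionName, Collections.emptyList());
    }
}
```

With this shape, the test's `getSchedulers("decompress")` call returning exactly one CompressionScheduler corresponds to registering that scheduler under both "compress" and "decompress".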

Example 2 with ActionScheduler

Use of org.smartdata.model.action.ActionScheduler in project SSM by Intel-bigdata.

From class CmdletManager, method scheduleCmdletActions:

private ScheduleResult scheduleCmdletActions(CmdletInfo info, LaunchCmdlet launchCmdlet) {
    List<Long> actIds = info.getAids();
    int idx = 0;
    int schIdx = 0;
    ActionInfo actionInfo;
    LaunchAction launchAction;
    List<ActionScheduler> actSchedulers;
    boolean skipped = false;
    ScheduleResult scheduleResult = ScheduleResult.SUCCESS_NO_EXECUTION;
    ScheduleResult resultTmp;
    for (idx = 0; idx < actIds.size(); idx++) {
        actionInfo = idToActions.get(actIds.get(idx));
        launchAction = launchCmdlet.getLaunchActions().get(idx);
        actSchedulers = schedulers.get(actionInfo.getActionName());
        if (actSchedulers == null || actSchedulers.size() == 0) {
            skipped = true;
            continue;
        }
        for (schIdx = 0; schIdx < actSchedulers.size(); schIdx++) {
            ActionScheduler s = actSchedulers.get(schIdx);
            try {
                resultTmp = s.onSchedule(info, actionInfo, launchCmdlet, launchAction, idx);
            } catch (Throwable t) {
                actionInfo.appendLogLine("\nOnSchedule exception: " + t);
                resultTmp = ScheduleResult.FAIL;
            }
            if (resultTmp != ScheduleResult.SUCCESS && resultTmp != ScheduleResult.SUCCESS_NO_EXECUTION) {
                scheduleResult = resultTmp;
            } else if (scheduleResult == ScheduleResult.SUCCESS_NO_EXECUTION) {
                scheduleResult = resultTmp;
            }
            if (scheduleResult != ScheduleResult.SUCCESS && scheduleResult != ScheduleResult.SUCCESS_NO_EXECUTION) {
                break;
            }
        }
        if (scheduleResult != ScheduleResult.SUCCESS && scheduleResult != ScheduleResult.SUCCESS_NO_EXECUTION) {
            break;
        }
    }
    if (scheduleResult == ScheduleResult.SUCCESS || scheduleResult == ScheduleResult.SUCCESS_NO_EXECUTION) {
        idx--;
        schIdx--;
        if (skipped) {
            scheduleResult = ScheduleResult.SUCCESS;
        }
    }
    postscheduleCmdletActions(info, actIds, scheduleResult, idx, schIdx);
    return scheduleResult;
}
Also used : ScheduleResult(org.smartdata.model.action.ScheduleResult) LaunchAction(org.smartdata.model.LaunchAction) AtomicLong(java.util.concurrent.atomic.AtomicLong) ActionScheduler(org.smartdata.model.action.ActionScheduler) ActionInfo(org.smartdata.model.ActionInfo)
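The result-merging rule the inner loop applies can be isolated into a small helper: any non-success result wins outright and stops scheduling, a concrete SUCCESS upgrades SUCCESS_NO_EXECUTION, and SUCCESS_NO_EXECUTION survives only if every scheduler returned it. The enum below is a local stand-in for org.smartdata.model.action.ScheduleResult (RETRY is included only as an example of a non-success value), and merge is a hypothetical name:

```java
public class ResultMerge {
    // Stand-in for org.smartdata.model.action.ScheduleResult.
    public enum ScheduleResult { SUCCESS, SUCCESS_NO_EXECUTION, FAIL, RETRY }

    // Combines the running result with one scheduler's onSchedule result.
    public static ScheduleResult merge(ScheduleResult current, ScheduleResult next) {
        // Any non-success result replaces the running result.
        if (next != ScheduleResult.SUCCESS && next != ScheduleResult.SUCCESS_NO_EXECUTION) {
            return next;
        }
        // The first concrete SUCCESS upgrades SUCCESS_NO_EXECUTION.
        if (current == ScheduleResult.SUCCESS_NO_EXECUTION) {
            return next;
        }
        return current;
    }
}
```

Pulling the rule out this way makes the loop's intent testable on its own: the ordering FAIL > SUCCESS > SUCCESS_NO_EXECUTION falls directly out of the two branches.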

Example 3 with ActionScheduler

Use of org.smartdata.model.action.ActionScheduler in project SSM by Intel-bigdata.

From class CmdletManager, method onActionStatusUpdate:

public void onActionStatusUpdate(ActionStatus status) throws IOException, ActionException {
    if (status == null) {
        return;
    }
    long actionId = status.getActionId();
    if (idToActions.containsKey(actionId)) {
        ActionInfo actionInfo = idToActions.get(actionId);
        CmdletInfo cmdletInfo = idToCmdlets.get(status.getCmdletId());
        synchronized (actionInfo) {
            if (!actionInfo.isFinished()) {
                actionInfo.setLog(status.getLog());
                actionInfo.setResult(status.getResult());
                if (!status.isFinished()) {
                    actionInfo.setProgress(status.getPercentage());
                    if (actionInfo.getCreateTime() == 0) {
                        actionInfo.setCreateTime(cmdletInfo.getGenerateTime());
                    }
                    actionInfo.setFinishTime(System.currentTimeMillis());
                } else {
                    actionInfo.setProgress(1.0F);
                    actionInfo.setFinished(true);
                    actionInfo.setCreateTime(status.getStartTime());
                    actionInfo.setFinishTime(status.getFinishTime());
                    if (status.getThrowable() != null) {
                        actionInfo.setSuccessful(false);
                    } else {
                        actionInfo.setSuccessful(true);
                        updateStorageIfNeeded(actionInfo);
                    }
                    int actionIndex = 0;
                    for (long id : cmdletInfo.getAids()) {
                        if (id == actionId) {
                            break;
                        }
                        actionIndex++;
                    }
                    for (ActionScheduler p : schedulers.get(actionInfo.getActionName())) {
                        p.onActionFinished(cmdletInfo, actionInfo, actionIndex);
                    }
                }
            }
        }
    } else {
        // Ignore status updates for actions that are no longer pending or running.
    }
}
Also used : ActionScheduler(org.smartdata.model.action.ActionScheduler) ActionInfo(org.smartdata.model.ActionInfo) CmdletInfo(org.smartdata.model.CmdletInfo)
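The loop that derives actionIndex from cmdletInfo.getAids() is a linear index search, equivalent to List.indexOf with unboxing. A minimal sketch of just that step (findActionIndex is an illustrative name, not part of SSM):

```java
import java.util.Arrays;
import java.util.List;

public class ActionIndex {
    // Returns the position of actionId within the cmdlet's action-id list,
    // matching the counting loop in onActionStatusUpdate.
    public static int findActionIndex(List<Long> aids, long actionId) {
        int idx = 0;
        for (long id : aids) {
            if (id == actionId) {
                return idx;
            }
            idx++;
        }
        // Like the original loop, falls through to aids.size() when the id
        // is absent; callers should treat that as "not found".
        return idx;
    }
}
```

The index is then handed to each scheduler's onActionFinished callback so a scheduler knows which action within the cmdlet completed.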

Example 4 with ActionScheduler

Use of org.smartdata.model.action.ActionScheduler in project SSM by Intel-bigdata.

From class CmdletManager, method postscheduleCmdletActions:

private void postscheduleCmdletActions(CmdletInfo cmdletInfo, List<Long> actions, ScheduleResult result, int lastAction, int lastScheduler) {
    List<ActionScheduler> actSchedulers;
    for (int aidx = lastAction; aidx >= 0; aidx--) {
        ActionInfo info = idToActions.get(actions.get(aidx));
        actSchedulers = schedulers.get(info.getActionName());
        if (actSchedulers == null || actSchedulers.size() == 0) {
            continue;
        }
        if (lastScheduler < 0) {
            lastScheduler = actSchedulers.size() - 1;
        }
        for (int sidx = lastScheduler; sidx >= 0; sidx--) {
            try {
                actSchedulers.get(sidx).postSchedule(cmdletInfo, info, sidx, result);
            } catch (Throwable t) {
                info.setLog((info.getLog() == null ? "" : info.getLog()) + "\nPostSchedule exception: " + t);
            }
        }
        lastScheduler = -1;
    }
}
Also used : ActionScheduler(org.smartdata.model.action.ActionScheduler) ActionInfo(org.smartdata.model.ActionInfo)
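The method above unwinds schedulers in reverse: it walks actions backwards from the last one reached, and within each action walks its schedulers backwards, starting mid-list only for the action the scheduling pass stopped in (lastScheduler), then resetting to the full list for every earlier action. A sketch of just that traversal order, with plain strings standing in for schedulers (Unwind and unwindOrder are illustrative names):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Unwind {
    // Returns the order in which postSchedule would be invoked, given the
    // per-action scheduler lists and where scheduling stopped.
    public static List<String> unwindOrder(List<List<String>> schedulersPerAction,
                                           int lastAction, int lastScheduler) {
        List<String> visited = new ArrayList<>();
        for (int aidx = lastAction; aidx >= 0; aidx--) {
            List<String> scheds = schedulersPerAction.get(aidx);
            if (scheds.isEmpty()) {
                continue;
            }
            // A negative lastScheduler means "start from this action's last scheduler".
            if (lastScheduler < 0) {
                lastScheduler = scheds.size() - 1;
            }
            for (int sidx = lastScheduler; sidx >= 0; sidx--) {
                visited.add(scheds.get(sidx));
            }
            // Every earlier action gets all of its schedulers unwound.
            lastScheduler = -1;
        }
        return visited;
    }
}
```

This mirrors how scheduleCmdletActions decrements idx and schIdx before calling postscheduleCmdletActions on success, so only schedulers that actually ran get their postSchedule callback.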

Example 5 with ActionScheduler

Use of org.smartdata.model.action.ActionScheduler in project SSM by Intel-bigdata.

From class TestCacheScheduler, method testCacheUncacheFile:

@Test(timeout = 100000)
public void testCacheUncacheFile() throws Exception {
    waitTillSSMExitSafeMode();
    String filePath = "/testFile";
    FSDataOutputStream out = dfs.create(new Path(filePath));
    out.writeChars("test content");
    out.close();
    CmdletManager cmdletManager = ssm.getCmdletManager();
    long cid = cmdletManager.submitCmdlet("cache -file " + filePath);
    while (!cmdletManager.getCmdletInfo(cid).getState().equals(CmdletState.DONE)) {
        Thread.sleep(500);
    }
    boolean ssmPoolFound = false;
    RemoteIterator<CachePoolEntry> poolEntries = dfsClient.listCachePools();
    while (poolEntries.hasNext()) {
        CachePoolEntry poolEntry = poolEntries.next();
        if (poolEntry.getInfo().getPoolName().equals(CacheScheduler.SSM_POOL)) {
            ssmPoolFound = true;
            break;
        }
    }
    if (!ssmPoolFound) {
        fail("A cache pool should be created by SSM: " + CacheScheduler.SSM_POOL);
    }
    // Currently, there is only one scheduler for cache action
    ActionScheduler actionScheduler = cmdletManager.getSchedulers("cache").get(0);
    assertTrue(actionScheduler instanceof CacheScheduler);
    Set<String> fileLock = ((CacheScheduler) actionScheduler).getFileLock();
    // There is no file locked after the action is finished.
    assertTrue(fileLock.isEmpty());
    long cid1 = cmdletManager.submitCmdlet("uncache -file " + filePath);
    while (!cmdletManager.getCmdletInfo(cid1).getState().equals(CmdletState.DONE)) {
        Thread.sleep(500);
    }
    // There is no file locked after the action is finished.
    assertTrue(fileLock.isEmpty());
}
Also used : Path(org.apache.hadoop.fs.Path) CmdletManager(org.smartdata.server.engine.CmdletManager) ActionScheduler(org.smartdata.model.action.ActionScheduler) FSDataOutputStream(org.apache.hadoop.fs.FSDataOutputStream) CacheScheduler(org.smartdata.hdfs.scheduler.CacheScheduler) CachePoolEntry(org.apache.hadoop.hdfs.protocol.CachePoolEntry) Test(org.junit.Test)
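The test polls getCmdletInfo(cid).getState() until it reaches CmdletState.DONE. That wait can be factored into a small bounded-polling helper; Poll and waitUntil below are made-up names, and the condition is passed as a BooleanSupplier so the same helper works for any state check:

```java
import java.util.function.BooleanSupplier;

public class Poll {
    // Polls until the condition holds or the timeout elapses. Returns true
    // if the condition was met, false on timeout or interruption.
    public static boolean waitUntil(BooleanSupplier condition, long timeoutMs,
                                    long intervalMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false;
            }
            try {
                Thread.sleep(intervalMs);  // avoid spinning a core while waiting
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }
}
```

In the test this would read, for example, `assertTrue(Poll.waitUntil(() -> cmdletManager.getCmdletInfo(cid).getState().equals(CmdletState.DONE), 100000, 500));`, keeping the wait inside the @Test timeout.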

Aggregations

ActionScheduler (org.smartdata.model.action.ActionScheduler): 6
Test (org.junit.Test): 3
ActionInfo (org.smartdata.model.ActionInfo): 3
CmdletManager (org.smartdata.server.engine.CmdletManager): 3
CompressionScheduler (org.smartdata.hdfs.scheduler.CompressionScheduler): 2
AtomicLong (java.util.concurrent.atomic.AtomicLong): 1
FSDataOutputStream (org.apache.hadoop.fs.FSDataOutputStream): 1
Path (org.apache.hadoop.fs.Path): 1
CachePoolEntry (org.apache.hadoop.hdfs.protocol.CachePoolEntry): 1
HdfsFileStatus (org.apache.hadoop.hdfs.protocol.HdfsFileStatus): 1
CacheScheduler (org.smartdata.hdfs.scheduler.CacheScheduler): 1
CmdletInfo (org.smartdata.model.CmdletInfo): 1
CompressionFileState (org.smartdata.model.CompressionFileState): 1
FileState (org.smartdata.model.FileState): 1
LaunchAction (org.smartdata.model.LaunchAction): 1
ScheduleResult (org.smartdata.model.action.ScheduleResult): 1