Example 1 with ClusterLiveNodesVerifier

Use of org.apache.helix.tools.ClusterVerifiers.ClusterLiveNodesVerifier in project helix by apache.

From the class IntegrationTestUtil, the method verifyLiveNodes:

public void verifyLiveNodes(String[] args) {
    if (args == null || args.length == 0) {
        System.err.println("Illegal arguments for " + verifyLiveNodes);
        return;
    }
    long timeoutValue = defaultTimeout;
    // args[0] is the cluster name; the remaining arguments are the expected live instance names
    String clusterName = args[0];
    List<String> liveNodes = new ArrayList<String>();
    for (int i = 1; i < args.length; i++) {
        liveNodes.add(args[i]);
    }
    // Blocks until the cluster's live instances match the expected list or the timeout elapses
    ClusterLiveNodesVerifier verifier = new ClusterLiveNodesVerifier(_zkclient, clusterName, liveNodes);
    boolean success = verifier.verify(timeoutValue);
    System.out.println(success ? "Successful" : "Failed");
}
Also used: ArrayList (java.util.ArrayList) ClusterLiveNodesVerifier (org.apache.helix.tools.ClusterVerifiers.ClusterLiveNodesVerifier)
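For reference outside the test harness, the sketch below shows the same check as a self-contained program: it opens a ZooKeeper connection, builds a ClusterLiveNodesVerifier directly, and polls until the live-instance list matches. The ZooKeeper address, cluster name, and instance names are hypothetical placeholders, and the constructor mirrors the one used above (a raw ZkClient); newer Helix releases may expect a HelixZkClient instead.

import java.util.Arrays;
import java.util.List;

import org.apache.helix.manager.zk.ZNRecordSerializer;
import org.apache.helix.manager.zk.ZkClient;
import org.apache.helix.tools.ClusterVerifiers.ClusterLiveNodesVerifier;

public class LiveNodesCheckSketch {
    public static void main(String[] args) {
        // Hypothetical ZooKeeper address; the serializer is required for reading Helix ZNRecords
        ZkClient zkClient = new ZkClient("localhost:2181", ZkClient.DEFAULT_SESSION_TIMEOUT,
                ZkClient.DEFAULT_CONNECTION_TIMEOUT, new ZNRecordSerializer());
        try {
            // Hypothetical cluster and instance names
            List<String> expectedLiveNodes = Arrays.asList("localhost_12918", "localhost_12919");
            ClusterLiveNodesVerifier verifier =
                    new ClusterLiveNodesVerifier(zkClient, "TEST_CLUSTER", expectedLiveNodes);
            // verify(timeout) polls until the live instances match the expected list
            // or the timeout (in milliseconds) elapses
            boolean success = verifier.verify(30 * 1000L);
            System.out.println(success ? "Successful" : "Failed");
        } finally {
            zkClient.close();
        }
    }
}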

Example 2 with ClusterLiveNodesVerifier

Use of org.apache.helix.tools.ClusterVerifiers.ClusterLiveNodesVerifier in project helix by apache.

From the class TestTaskRebalancerParallel, the method testWhenAllowOverlapJobAssignment:

/**
 * This test starts 4 jobs in the job queue, all of which get stuck, and verifies that
 * (1) the number of running jobs does not exceed the configured maximum of parallel jobs, and
 * (2) one instance can be assigned to multiple jobs in the workflow when overlap assignment is allowed.
 */
@Test(dependsOnMethods = { "testWhenDisallowOverlapJobAssignment" })
public void testWhenAllowOverlapJobAssignment() throws Exception {
    // Disable all participants except one so that every assignment lands on a single host
    for (int i = 1; i < _numNodes; i++) {
        _participants[i].syncStop();
    }
    ClusterLiveNodesVerifier verifier = new ClusterLiveNodesVerifier(_gZkClient, CLUSTER_NAME, Collections.singletonList(_participants[0].getInstanceName()));
    Assert.assertTrue(verifier.verify());
    String queueName = TestHelper.getTestMethodName();
    WorkflowConfig.Builder cfgBuilder = new WorkflowConfig.Builder(queueName);
    cfgBuilder.setParallelJobs(PARALLEL_COUNT);
    cfgBuilder.setAllowOverlapJobAssignment(true);
    JobQueue.Builder queueBuild = new JobQueue.Builder(queueName).setWorkflowConfig(cfgBuilder.build());
    JobQueue queue = queueBuild.build();
    _driver.createQueue(queue);
    // Create jobs that can be assigned to any instance
    List<JobConfig.Builder> jobConfigBuilders = new ArrayList<JobConfig.Builder>();
    for (int i = 0; i < PARALLEL_COUNT; i++) {
        List<TaskConfig> taskConfigs = new ArrayList<TaskConfig>();
        for (int j = 0; j < TASK_COUNT; j++) {
            taskConfigs.add(new TaskConfig.Builder().setTaskId("task_" + j).setCommand(MockTask.TASK_COMMAND).build());
        }
        jobConfigBuilders.add(new JobConfig.Builder().addTaskConfigs(taskConfigs));
    }
    _driver.stop(queueName);
    for (int i = 0; i < jobConfigBuilders.size(); ++i) {
        _driver.enqueueJob(queueName, "job_" + (i + 1), jobConfigBuilders.get(i));
    }
    _driver.resume(queueName);
    Thread.sleep(2000);
    Assert.assertTrue(TaskTestUtil.pollForWorkflowParallelState(_driver, queueName));
    for (int i = 1; i < _numNodes; i++) {
        _participants[i].syncStart();
    }
}
Also used: JobQueue (org.apache.helix.task.JobQueue) ArrayList (java.util.ArrayList) TaskConfig (org.apache.helix.task.TaskConfig) ClusterLiveNodesVerifier (org.apache.helix.tools.ClusterVerifiers.ClusterLiveNodesVerifier) JobConfig (org.apache.helix.task.JobConfig) WorkflowConfig (org.apache.helix.task.WorkflowConfig) Test (org.testng.annotations.Test)
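The queue configuration is what drives the assertions above: setParallelJobs caps how many jobs run concurrently, and setAllowOverlapJobAssignment lets the single live instance hold tasks from several jobs at once. Below is a minimal sketch of just that configuration step, with a hypothetical class name, queue name, and counts; every builder call is taken directly from the test above.

import java.util.ArrayList;
import java.util.List;

import org.apache.helix.task.JobConfig;
import org.apache.helix.task.JobQueue;
import org.apache.helix.task.TaskConfig;
import org.apache.helix.task.WorkflowConfig;

public class ParallelQueueSketch {
    // Builds a job queue that allows up to maxParallelJobs concurrent jobs and
    // permits one instance to be assigned to multiple jobs at the same time
    public static JobQueue buildQueue(String queueName, int maxParallelJobs) {
        WorkflowConfig.Builder cfgBuilder = new WorkflowConfig.Builder(queueName);
        cfgBuilder.setParallelJobs(maxParallelJobs);
        cfgBuilder.setAllowOverlapJobAssignment(true);
        return new JobQueue.Builder(queueName).setWorkflowConfig(cfgBuilder.build()).build();
    }

    // Builds a job of taskCount generic tasks that all run the given command
    public static JobConfig.Builder buildJob(String command, int taskCount) {
        List<TaskConfig> taskConfigs = new ArrayList<TaskConfig>();
        for (int j = 0; j < taskCount; j++) {
            taskConfigs.add(new TaskConfig.Builder().setTaskId("task_" + j).setCommand(command).build());
        }
        return new JobConfig.Builder().addTaskConfigs(taskConfigs);
    }
}

The resulting queue and jobs would then be submitted exactly as in the test: createQueue for the queue, one enqueueJob call per job while the queue is stopped, and resume to let the rebalancer schedule them.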

Aggregations

ArrayList (java.util.ArrayList): 2 uses
ClusterLiveNodesVerifier (org.apache.helix.tools.ClusterVerifiers.ClusterLiveNodesVerifier): 2 uses
JobConfig (org.apache.helix.task.JobConfig): 1 use
JobQueue (org.apache.helix.task.JobQueue): 1 use
TaskConfig (org.apache.helix.task.TaskConfig): 1 use
WorkflowConfig (org.apache.helix.task.WorkflowConfig): 1 use
Test (org.testng.annotations.Test): 1 use