
Example 1 with AccessExecutionJobVertex

Use of org.apache.flink.runtime.executiongraph.AccessExecutionJobVertex in project flink by apache.

From the class AbstractJobVertexRequestHandler, method handleRequest:

@Override
public final String handleRequest(AccessExecutionGraph graph, Map<String, String> params) throws Exception {
    final JobVertexID vid = parseJobVertexId(params);
    final AccessExecutionJobVertex jobVertex = graph.getJobVertex(vid);
    if (jobVertex == null) {
        throw new IllegalArgumentException("No vertex with ID '" + vid + "' exists.");
    }
    return handleRequest(jobVertex, params);
}
Also used: AccessExecutionJobVertex (org.apache.flink.runtime.executiongraph.AccessExecutionJobVertex), JobVertexID (org.apache.flink.runtime.jobgraph.JobVertexID)
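The lookup pattern in Example 1 (parse an ID from the request parameters, resolve it against the graph, and fail fast with an IllegalArgumentException) can be sketched with plain stdlib types. The class and parameter names below are hypothetical stand-ins, not Flink's actual API:

```java
import java.util.Map;

// Hypothetical stand-in for the handler's lookup pattern: parse an ID from
// the request parameters and fail fast if it does not resolve to a vertex.
public class VertexLookup {

    // Mimics parseJobVertexId: reads the "vertexid" parameter (key name assumed).
    static String parseVertexId(Map<String, String> params) {
        String id = params.get("vertexid");
        if (id == null) {
            throw new IllegalArgumentException("Missing parameter 'vertexid'.");
        }
        return id;
    }

    // Mimics graph.getJobVertex(vid) plus the null check from the handler above.
    static String lookup(Map<String, String> vertices, Map<String, String> params) {
        String vid = parseVertexId(params);
        String vertex = vertices.get(vid);
        if (vertex == null) {
            throw new IllegalArgumentException("No vertex with ID '" + vid + "' exists.");
        }
        return vertex;
    }
}
```

The point of the pattern is that the abstract base class centralizes the parse-and-resolve step, so concrete handlers only ever see a non-null vertex.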

Example 2 with AccessExecutionJobVertex

Use of org.apache.flink.runtime.executiongraph.AccessExecutionJobVertex in project flink by apache.

From the class JobVertexBackPressureHandler, method handleRequest:

@Override
public String handleRequest(AccessExecutionJobVertex accessJobVertex, Map<String, String> params) throws Exception {
    if (accessJobVertex instanceof ArchivedExecutionJobVertex) {
        return "";
    }
    ExecutionJobVertex jobVertex = (ExecutionJobVertex) accessJobVertex;
    try (StringWriter writer = new StringWriter();
        JsonGenerator gen = JsonFactory.jacksonFactory.createGenerator(writer)) {
        gen.writeStartObject();
        Option<OperatorBackPressureStats> statsOption = backPressureStatsTracker.getOperatorBackPressureStats(jobVertex);
        if (statsOption.isDefined()) {
            OperatorBackPressureStats stats = statsOption.get();
            // Check whether we need to refresh
            if (refreshInterval <= System.currentTimeMillis() - stats.getEndTimestamp()) {
                backPressureStatsTracker.triggerStackTraceSample(jobVertex);
                gen.writeStringField("status", "deprecated");
            } else {
                gen.writeStringField("status", "ok");
            }
            gen.writeStringField("backpressure-level", getBackPressureLevel(stats.getMaxBackPressureRatio()));
            gen.writeNumberField("end-timestamp", stats.getEndTimestamp());
            // Sub tasks
            gen.writeArrayFieldStart("subtasks");
            int numSubTasks = stats.getNumberOfSubTasks();
            for (int i = 0; i < numSubTasks; i++) {
                double ratio = stats.getBackPressureRatio(i);
                gen.writeStartObject();
                gen.writeNumberField("subtask", i);
                gen.writeStringField("backpressure-level", getBackPressureLevel(ratio));
                gen.writeNumberField("ratio", ratio);
                gen.writeEndObject();
            }
            gen.writeEndArray();
        } else {
            backPressureStatsTracker.triggerStackTraceSample(jobVertex);
            gen.writeStringField("status", "deprecated");
        }
        gen.writeEndObject();
        gen.close();
        return writer.toString();
    }
}
Also used: ArchivedExecutionJobVertex (org.apache.flink.runtime.executiongraph.ArchivedExecutionJobVertex), StringWriter (java.io.StringWriter), ExecutionJobVertex (org.apache.flink.runtime.executiongraph.ExecutionJobVertex), AccessExecutionJobVertex (org.apache.flink.runtime.executiongraph.AccessExecutionJobVertex), OperatorBackPressureStats (org.apache.flink.runtime.webmonitor.OperatorBackPressureStats), JsonGenerator (com.fasterxml.jackson.core.JsonGenerator)
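The JSON payload Example 2 streams out with Jackson can be sketched stdlib-only with a StringBuilder, which makes the field layout easy to see. The 0.10 / 0.50 thresholds in the level helper are an assumption about what getBackPressureLevel does, not taken from the source above:

```java
// Sketch of the back-pressure payload shape, built with plain StringBuilder
// instead of Jackson so it stays stdlib-only. The 0.10 / 0.50 thresholds are
// an assumed mapping for getBackPressureLevel, not confirmed by the source.
public class BackPressureJson {

    // Maps a back-pressure ratio to a coarse level label (thresholds assumed).
    static String level(double ratio) {
        if (ratio <= 0.10) return "ok";
        if (ratio <= 0.50) return "low";
        return "high";
    }

    // Renders the "stats available and fresh" branch of the handler above.
    static String render(long endTimestamp, double[] subtaskRatios) {
        double max = 0;
        for (double r : subtaskRatios) max = Math.max(max, r);
        StringBuilder sb = new StringBuilder();
        sb.append("{\"status\":\"ok\"")
          .append(",\"backpressure-level\":\"").append(level(max)).append('"')
          .append(",\"end-timestamp\":").append(endTimestamp)
          .append(",\"subtasks\":[");
        for (int i = 0; i < subtaskRatios.length; i++) {
            if (i > 0) sb.append(',');
            sb.append("{\"subtask\":").append(i)
              .append(",\"backpressure-level\":\"").append(level(subtaskRatios[i])).append('"')
              .append(",\"ratio\":").append(subtaskRatios[i]).append('}');
        }
        return sb.append("]}").toString();
    }
}
```

Note how the real handler reports "deprecated" and triggers a fresh stack-trace sample whenever the cached stats are older than the refresh interval, so clients are expected to poll again.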

Example 3 with AccessExecutionJobVertex

Use of org.apache.flink.runtime.executiongraph.AccessExecutionJobVertex in project flink by apache.

From the class JobDetailsHandlerTest, method compareJobDetails:

private static void compareJobDetails(AccessExecutionGraph originalJob, String json) throws IOException {
    JsonNode result = ArchivedJobGenerationUtils.mapper.readTree(json);
    Assert.assertEquals(originalJob.getJobID().toString(), result.get("jid").asText());
    Assert.assertEquals(originalJob.getJobName(), result.get("name").asText());
    Assert.assertEquals(originalJob.isStoppable(), result.get("isStoppable").asBoolean());
    Assert.assertEquals(originalJob.getState().name(), result.get("state").asText());
    Assert.assertEquals(originalJob.getStatusTimestamp(JobStatus.CREATED), result.get("start-time").asLong());
    Assert.assertEquals(originalJob.getStatusTimestamp(originalJob.getState()), result.get("end-time").asLong());
    Assert.assertEquals(originalJob.getStatusTimestamp(originalJob.getState()) - originalJob.getStatusTimestamp(JobStatus.CREATED), result.get("duration").asLong());
    JsonNode timestamps = result.get("timestamps");
    for (JobStatus status : JobStatus.values()) {
        Assert.assertEquals(originalJob.getStatusTimestamp(status), timestamps.get(status.name()).asLong());
    }
    ArrayNode tasks = (ArrayNode) result.get("vertices");
    int x = 0;
    for (AccessExecutionJobVertex expectedTask : originalJob.getVerticesTopologically()) {
        JsonNode task = tasks.get(x);
        Assert.assertEquals(expectedTask.getJobVertexId().toString(), task.get("id").asText());
        Assert.assertEquals(expectedTask.getName(), task.get("name").asText());
        Assert.assertEquals(expectedTask.getParallelism(), task.get("parallelism").asInt());
        Assert.assertEquals(expectedTask.getAggregateState().name(), task.get("status").asText());
        Assert.assertEquals(3, task.get("start-time").asLong());
        Assert.assertEquals(5, task.get("end-time").asLong());
        Assert.assertEquals(2, task.get("duration").asLong());
        JsonNode subtasksPerState = task.get("tasks");
        Assert.assertEquals(0, subtasksPerState.get(ExecutionState.CREATED.name()).asInt());
        Assert.assertEquals(0, subtasksPerState.get(ExecutionState.SCHEDULED.name()).asInt());
        Assert.assertEquals(0, subtasksPerState.get(ExecutionState.DEPLOYING.name()).asInt());
        Assert.assertEquals(0, subtasksPerState.get(ExecutionState.RUNNING.name()).asInt());
        Assert.assertEquals(1, subtasksPerState.get(ExecutionState.FINISHED.name()).asInt());
        Assert.assertEquals(0, subtasksPerState.get(ExecutionState.CANCELING.name()).asInt());
        Assert.assertEquals(0, subtasksPerState.get(ExecutionState.CANCELED.name()).asInt());
        Assert.assertEquals(0, subtasksPerState.get(ExecutionState.FAILED.name()).asInt());
        long expectedNumBytesIn = 0;
        long expectedNumBytesOut = 0;
        long expectedNumRecordsIn = 0;
        long expectedNumRecordsOut = 0;
        for (AccessExecutionVertex vertex : expectedTask.getTaskVertices()) {
            IOMetrics ioMetrics = vertex.getCurrentExecutionAttempt().getIOMetrics();
            expectedNumBytesIn += ioMetrics.getNumBytesInLocal() + ioMetrics.getNumBytesInRemote();
            expectedNumBytesOut += ioMetrics.getNumBytesOut();
            expectedNumRecordsIn += ioMetrics.getNumRecordsIn();
            expectedNumRecordsOut += ioMetrics.getNumRecordsOut();
        }
        JsonNode metrics = task.get("metrics");
        Assert.assertEquals(expectedNumBytesIn, metrics.get("read-bytes").asLong());
        Assert.assertEquals(expectedNumBytesOut, metrics.get("write-bytes").asLong());
        Assert.assertEquals(expectedNumRecordsIn, metrics.get("read-records").asLong());
        Assert.assertEquals(expectedNumRecordsOut, metrics.get("write-records").asLong());
        x++;
    }
    Assert.assertEquals(1, tasks.size());
    JsonNode statusCounts = result.get("status-counts");
    Assert.assertEquals(0, statusCounts.get(ExecutionState.CREATED.name()).asInt());
    Assert.assertEquals(0, statusCounts.get(ExecutionState.SCHEDULED.name()).asInt());
    Assert.assertEquals(0, statusCounts.get(ExecutionState.DEPLOYING.name()).asInt());
    Assert.assertEquals(1, statusCounts.get(ExecutionState.RUNNING.name()).asInt());
    Assert.assertEquals(0, statusCounts.get(ExecutionState.FINISHED.name()).asInt());
    Assert.assertEquals(0, statusCounts.get(ExecutionState.CANCELING.name()).asInt());
    Assert.assertEquals(0, statusCounts.get(ExecutionState.CANCELED.name()).asInt());
    Assert.assertEquals(0, statusCounts.get(ExecutionState.FAILED.name()).asInt());
    Assert.assertEquals(ArchivedJobGenerationUtils.mapper.readTree(originalJob.getJsonPlan()), result.get("plan"));
}
Also used: JobStatus (org.apache.flink.runtime.jobgraph.JobStatus), AccessExecutionJobVertex (org.apache.flink.runtime.executiongraph.AccessExecutionJobVertex), JsonNode (com.fasterxml.jackson.databind.JsonNode), ArrayNode (com.fasterxml.jackson.databind.node.ArrayNode), IOMetrics (org.apache.flink.runtime.executiongraph.IOMetrics), AccessExecutionVertex (org.apache.flink.runtime.executiongraph.AccessExecutionVertex)
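The inner loop of Example 3 recomputes the per-vertex I/O metrics by summing over subtasks, with bytes-in defined as local plus remote bytes. That aggregation can be sketched with a small stdlib-only stand-in (IoStats here is hypothetical, mirroring the fields of Flink's IOMetrics that the test reads):

```java
// Sketch of the per-vertex metric aggregation the test recomputes.
// IoStats is a hypothetical stand-in for the IOMetrics accessors used above.
public class MetricSums {

    static class IoStats {
        final long bytesInLocal, bytesInRemote, bytesOut, recordsIn, recordsOut;
        IoStats(long bytesInLocal, long bytesInRemote, long bytesOut,
                long recordsIn, long recordsOut) {
            this.bytesInLocal = bytesInLocal;
            this.bytesInRemote = bytesInRemote;
            this.bytesOut = bytesOut;
            this.recordsIn = recordsIn;
            this.recordsOut = recordsOut;
        }
    }

    // Returns {bytesIn, bytesOut, recordsIn, recordsOut} summed over subtasks.
    static long[] aggregate(IoStats[] subtasks) {
        long bytesIn = 0, bytesOut = 0, recordsIn = 0, recordsOut = 0;
        for (IoStats s : subtasks) {
            bytesIn += s.bytesInLocal + s.bytesInRemote; // local + remote, as in the test
            bytesOut += s.bytesOut;
            recordsIn += s.recordsIn;
            recordsOut += s.recordsOut;
        }
        return new long[] { bytesIn, bytesOut, recordsIn, recordsOut };
    }
}
```

The test then asserts these sums against the "read-bytes", "write-bytes", "read-records", and "write-records" fields of each vertex's "metrics" node.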

Example 4 with AccessExecutionJobVertex

Use of org.apache.flink.runtime.executiongraph.AccessExecutionJobVertex in project flink by apache.

From the class JobVertexAccumulatorsHandlerTest, method testArchiver:

@Test
public void testArchiver() throws Exception {
    JsonArchivist archivist = new JobVertexAccumulatorsHandler.JobVertexAccumulatorsJsonArchivist();
    AccessExecutionGraph originalJob = ArchivedJobGenerationUtils.getTestJob();
    AccessExecutionJobVertex originalTask = ArchivedJobGenerationUtils.getTestTask();
    Collection<ArchivedJson> archives = archivist.archiveJsonWithPath(originalJob);
    Assert.assertEquals(1, archives.size());
    ArchivedJson archive = archives.iterator().next();
    Assert.assertEquals("/jobs/" + originalJob.getJobID() + "/vertices/" + originalTask.getJobVertexId() + "/accumulators", archive.getPath());
    compareAccumulators(originalTask, archive.getJson());
}
Also used: AccessExecutionJobVertex (org.apache.flink.runtime.executiongraph.AccessExecutionJobVertex), JsonArchivist (org.apache.flink.runtime.webmonitor.history.JsonArchivist), ArchivedJson (org.apache.flink.runtime.webmonitor.history.ArchivedJson), AccessExecutionGraph (org.apache.flink.runtime.executiongraph.AccessExecutionGraph), Test (org.junit.Test)
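The path assertion in Example 4 pins down the REST layout the archivist writes under. A minimal sketch of that path construction (the helper name is hypothetical; the segment layout is taken from the assertion above):

```java
// Sketch of the archived REST path checked in the test above.
// The helper name is hypothetical; the path segments match the assertion.
public class ArchivePath {
    static String accumulatorsPath(String jobId, String vertexId) {
        return "/jobs/" + jobId + "/vertices/" + vertexId + "/accumulators";
    }
}
```

Keeping the archived path identical to the live REST endpoint is what lets the history server serve archived JSON under the same URLs the web UI already requests.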

Example 5 with AccessExecutionJobVertex

Use of org.apache.flink.runtime.executiongraph.AccessExecutionJobVertex in project flink by apache.

From the class JobVertexDetailsHandlerTest, method testJsonGeneration:

@Test
public void testJsonGeneration() throws Exception {
    AccessExecutionJobVertex originalTask = ArchivedJobGenerationUtils.getTestTask();
    String json = JobVertexDetailsHandler.createVertexDetailsJson(originalTask, ArchivedJobGenerationUtils.getTestJob().getJobID().toString(), null);
    compareVertexDetails(originalTask, json);
}
Also used: AccessExecutionJobVertex (org.apache.flink.runtime.executiongraph.AccessExecutionJobVertex), Test (org.junit.Test)

Aggregations

AccessExecutionJobVertex (org.apache.flink.runtime.executiongraph.AccessExecutionJobVertex): 18
Test (org.junit.Test): 13
AccessExecutionGraph (org.apache.flink.runtime.executiongraph.AccessExecutionGraph): 8
ArchivedJson (org.apache.flink.runtime.webmonitor.history.ArchivedJson): 7
JsonArchivist (org.apache.flink.runtime.webmonitor.history.JsonArchivist): 7
AccessExecution (org.apache.flink.runtime.executiongraph.AccessExecution): 5
AccessExecutionVertex (org.apache.flink.runtime.executiongraph.AccessExecutionVertex): 5
JobStatus (org.apache.flink.runtime.jobgraph.JobStatus): 3
JsonGenerator (com.fasterxml.jackson.core.JsonGenerator): 2
StringWriter (java.io.StringWriter): 2
ExecutionState (org.apache.flink.runtime.execution.ExecutionState): 2
JsonNode (com.fasterxml.jackson.databind.JsonNode): 1
ArrayNode (com.fasterxml.jackson.databind.node.ArrayNode): 1
ArchivedExecutionJobVertex (org.apache.flink.runtime.executiongraph.ArchivedExecutionJobVertex): 1
ExecutionJobVertex (org.apache.flink.runtime.executiongraph.ExecutionJobVertex): 1
IOMetrics (org.apache.flink.runtime.executiongraph.IOMetrics): 1
JobVertexID (org.apache.flink.runtime.jobgraph.JobVertexID): 1
JobDetails (org.apache.flink.runtime.messages.webmonitor.JobDetails): 1
OperatorBackPressureStats (org.apache.flink.runtime.webmonitor.OperatorBackPressureStats): 1
MutableIOMetrics (org.apache.flink.runtime.webmonitor.utils.MutableIOMetrics): 1