
Example 1 with JobLogDTO

Use of io.hops.hopsworks.common.jobs.JobLogDTO in the hopsworks project by logicalclocks.

From the class AbstractExecutionController, the getLog method:

// ====================================================================================================================
// Execution logs
// ====================================================================================================================
@Override
public JobLogDTO getLog(Execution execution, JobLogDTO.LogType type) throws JobException {
    if (!execution.getState().isFinalState()) {
        throw new JobException(RESTCodes.JobErrorCode.JOB_EXECUTION_INVALID_STATE, Level.FINE, "Job still running.");
    }
    JobLogDTO dto = new JobLogDTO(type);
    DistributedFileSystemOps dfso = null;
    try {
        dfso = dfs.getDfsOps();
        String message;
        String stdPath;
        String path = (dto.getType() == JobLogDTO.LogType.OUT ? execution.getStdoutPath() : execution.getStderrPath());
        JobLogDTO.Retriable retriable = (dto.getType() == JobLogDTO.LogType.OUT ? JobLogDTO.Retriable.RETRIEABLE_OUT : JobLogDTO.Retriable.RETRIABLE_ERR);
        boolean status = (dto.getType() != JobLogDTO.LogType.OUT || execution.getFinalStatus().equals(JobFinalStatus.SUCCEEDED));
        String hdfsPath = REMOTE_PROTOCOL + path;
        if (!Strings.isNullOrEmpty(path) && dfso.exists(hdfsPath)) {
            Project project = execution.getJob().getProject();
            stdPath = path.split(project.getName())[1];
            int fileIndex = stdPath.lastIndexOf('/');
            String stdDirPath = stdPath.substring(0, fileIndex);
            dto.setPath(Settings.DIR_ROOT + File.separator + project.getName() + stdDirPath + File.separator + "std" + dto.getType().getName().toLowerCase() + ".log");
            if (dfso.listStatus(new org.apache.hadoop.fs.Path(hdfsPath))[0].getLen() > settings.getJobLogsDisplaySize()) {
                dto.setLog("Log is too big to display in browser. Click on the download button to get the log file.");
            } else {
                try (InputStream input = dfso.open(hdfsPath)) {
                    message = IOUtils.toString(input, "UTF-8");
                }
                dto.setLog(message.isEmpty() ? "No information." : message);
                if (message.isEmpty() && execution.getState().isFinalState() && execution.getAppId() != null && status) {
                    dto.setRetriable(retriable);
                }
            }
    } else {
        String logMsg = "No log available.";
        if (execution.getJob().getJobType() == JobType.PYTHON) {
            logMsg += " If the job failed instantaneously, please check again later or try running the job again."
                + " Log aggregation can take a few minutes to complete.";
        }
        dto.setLog(logMsg);
            if (execution.getState().isFinalState() && execution.getAppId() != null && status) {
                dto.setRetriable(retriable);
            }
        }
    } catch (IOException ex) {
        LOGGER.log(Level.SEVERE, null, ex);
    } finally {
        if (dfso != null) {
            dfso.close();
        }
    }
    return dto;
}
Also used: FSDataInputStream (org.apache.hadoop.fs.FSDataInputStream), InputStream (java.io.InputStream), DistributedFileSystemOps (io.hops.hopsworks.common.hdfs.DistributedFileSystemOps), IOException (java.io.IOException), JobException (io.hops.hopsworks.exceptions.JobException), JobLogDTO (io.hops.hopsworks.common.jobs.JobLogDTO), Project (io.hops.hopsworks.persistence.entity.project.Project)
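The getLog method above derives the display path of the std*.log file by splitting the execution's HDFS path on the project name and dropping the file name. A minimal, self-contained sketch of that derivation follows; the class name, the "/Projects" root (standing in for Settings.DIR_ROOT), and all path values are hypothetical, not taken from the Hopsworks source. Note that, as in the original, String.split treats the project name as a regex, so names containing regex metacharacters would need quoting.

```java
// Sketch of the log-path derivation used in getLog, with hypothetical values.
public class LogPathSketch {

    // fullPath: the execution's stdout/stderr path in HDFS.
    // projectName: used as the split point, as in the original code.
    // logType: "OUT" or "ERR", lower-cased into the file name.
    public static String stdLogPath(String fullPath, String projectName, String logType) {
        // Keep everything after the project name, e.g. "/Logs/Spark/app_01/stdout.log".
        String stdPath = fullPath.split(projectName)[1];
        // Drop the file name, keeping only the enclosing directory.
        String stdDirPath = stdPath.substring(0, stdPath.lastIndexOf('/'));
        // Rebuild the path the DTO would expose: <root>/<project><dir>/std<type>.log
        return "/Projects" + "/" + projectName + stdDirPath + "/std" + logType.toLowerCase() + ".log";
    }

    public static void main(String[] args) {
        // Hypothetical execution path for a project named "demo".
        String path = "/user/hdfs/demo/Logs/Spark/app_01/stdout.log";
        System.out.println(stdLogPath(path, "demo", "OUT"));
        // -> /Projects/demo/Logs/Spark/app_01/stdout.log
    }
}
```

This mirrors why the original guards the branch with dfso.exists(hdfsPath): the split and substring both assume a well-formed path containing the project name.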

Example 2 with JobLogDTO

Use of io.hops.hopsworks.common.jobs.JobLogDTO in the hopsworks project by logicalclocks.

From the class ExecutionsResource, the retryLog method:

@ApiOperation(value = "Retry log aggregation of given execution and type", response = JobLogDTO.class)
@POST
@Path("{id}/log/{type}")
@Produces(MediaType.APPLICATION_JSON)
@AllowedProjectRoles({ AllowedProjectRoles.DATA_OWNER, AllowedProjectRoles.DATA_SCIENTIST })
@JWTRequired(acceptedTokens = { Audience.API, Audience.JOB }, allowedUserRoles = { "HOPS_ADMIN", "HOPS_USER" })
@ApiKeyRequired(acceptedScopes = { ApiScope.JOB }, allowedUserRoles = { "HOPS_ADMIN", "HOPS_USER" })
public Response retryLog(@PathParam("id") Integer id, @PathParam("type") JobLogDTO.LogType type, @Context SecurityContext sc) throws JobException {
    Execution execution = executionController.authorize(job, id);
    JobLogDTO dto = executionController.retryLogAggregation(execution, type);
    return Response.ok().entity(dto).build();
}
Also used: JobLogDTO (io.hops.hopsworks.common.jobs.JobLogDTO), Execution (io.hops.hopsworks.persistence.entity.jobs.history.Execution), Path (javax.ws.rs.Path), POST (javax.ws.rs.POST), Produces (javax.ws.rs.Produces), JWTRequired (io.hops.hopsworks.jwt.annotation.JWTRequired), ApiOperation (io.swagger.annotations.ApiOperation), ApiKeyRequired (io.hops.hopsworks.api.filter.apiKey.ApiKeyRequired), AllowedProjectRoles (io.hops.hopsworks.api.filter.AllowedProjectRoles)

Example 3 with JobLogDTO

Use of io.hops.hopsworks.common.jobs.JobLogDTO in the hopsworks project by logicalclocks.

From the class ExecutionsResource, the getLog method:

@ApiOperation(value = "Retrieve log of given execution and type", response = JobLogDTO.class)
@GET
@Path("{id}/log/{type}")
@Produces(MediaType.APPLICATION_JSON)
@AllowedProjectRoles({ AllowedProjectRoles.DATA_OWNER, AllowedProjectRoles.DATA_SCIENTIST })
@JWTRequired(acceptedTokens = { Audience.API }, allowedUserRoles = { "HOPS_ADMIN", "HOPS_USER" })
@ApiKeyRequired(acceptedScopes = { ApiScope.JOB }, allowedUserRoles = { "HOPS_ADMIN", "HOPS_USER" })
public Response getLog(@PathParam("id") Integer id, @PathParam("type") JobLogDTO.LogType type, @Context SecurityContext sc) throws JobException {
    Execution execution = executionController.authorize(job, id);
    JobLogDTO dto = executionController.getLog(execution, type);
    return Response.ok().entity(dto).build();
}
Also used: JobLogDTO (io.hops.hopsworks.common.jobs.JobLogDTO), Execution (io.hops.hopsworks.persistence.entity.jobs.history.Execution), Path (javax.ws.rs.Path), Produces (javax.ws.rs.Produces), GET (javax.ws.rs.GET), JWTRequired (io.hops.hopsworks.jwt.annotation.JWTRequired), ApiOperation (io.swagger.annotations.ApiOperation), ApiKeyRequired (io.hops.hopsworks.api.filter.apiKey.ApiKeyRequired), AllowedProjectRoles (io.hops.hopsworks.api.filter.AllowedProjectRoles)
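The two ExecutionsResource methods above share the same path template, {id}/log/{type}, and differ only in HTTP verb: GET retrieves the log, POST retries log aggregation. A hedged sketch of how the calls might look from the command line; the host, project id, job name, execution id, and API key are placeholders, not values from the source, and the real path prefix depends on the deployment:

```shell
# Hypothetical base URL for a project's job executions.
BASE="https://hopsworks.example.com/hopsworks-api/api/project/119/jobs/myjob/executions"

# Retrieve the stdout log of execution 42 (maps to ExecutionsResource.getLog).
echo "GET ${BASE}/42/log/out"
# curl -H "Authorization: ApiKey <key>" "${BASE}/42/log/out"          # needs a live cluster

# Retry log aggregation for the same execution (maps to ExecutionsResource.retryLog).
echo "POST ${BASE}/42/log/out"
# curl -X POST -H "Authorization: ApiKey <key>" "${BASE}/42/log/out"  # needs a live cluster
```

Per the annotations on both methods, the caller must hold a Data Owner or Data Scientist role in the project and authenticate with a JWT or an API key scoped to JOB.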

Aggregations

JobLogDTO (io.hops.hopsworks.common.jobs.JobLogDTO): 3 uses
AllowedProjectRoles (io.hops.hopsworks.api.filter.AllowedProjectRoles): 2 uses
ApiKeyRequired (io.hops.hopsworks.api.filter.apiKey.ApiKeyRequired): 2 uses
JWTRequired (io.hops.hopsworks.jwt.annotation.JWTRequired): 2 uses
Execution (io.hops.hopsworks.persistence.entity.jobs.history.Execution): 2 uses
ApiOperation (io.swagger.annotations.ApiOperation): 2 uses
Path (javax.ws.rs.Path): 2 uses
Produces (javax.ws.rs.Produces): 2 uses
DistributedFileSystemOps (io.hops.hopsworks.common.hdfs.DistributedFileSystemOps): 1 use
JobException (io.hops.hopsworks.exceptions.JobException): 1 use
Project (io.hops.hopsworks.persistence.entity.project.Project): 1 use
IOException (java.io.IOException): 1 use
InputStream (java.io.InputStream): 1 use
GET (javax.ws.rs.GET): 1 use
POST (javax.ws.rs.POST): 1 use
FSDataInputStream (org.apache.hadoop.fs.FSDataInputStream): 1 use