
Example 1 with JobConfiguration

Use of io.hops.hopsworks.persistence.entity.jobs.configuration.JobConfiguration in the hopsworks project by logicalclocks.

The updateJobSchedule method of the JobFacade class:

public boolean updateJobSchedule(int jobId, ScheduleDTO schedule) {
    boolean status = false;
    try {
        // Load the managed job entity and attach the new schedule to its configuration.
        Jobs managedJob = em.find(Jobs.class, jobId);
        JobConfiguration config = managedJob.getJobConfig();
        config.setSchedule(schedule);
        // Persist the modified configuration through the Jobs.updateConfig named query.
        TypedQuery<Jobs> q = em.createNamedQuery("Jobs.updateConfig", Jobs.class);
        q.setParameter("id", jobId);
        q.setParameter("jobconfig", config);
        int result = q.executeUpdate();
        LOGGER.log(Level.INFO, "Updated entity count = {0}", result);
        // The update succeeded only if exactly one row was affected.
        if (result == 1) {
            status = true;
        }
    } catch (SecurityException | IllegalArgumentException ex) {
        LOGGER.log(Level.SEVERE, "Could not update job with id: " + jobId, ex);
        throw ex;
    }
    return status;
}
Also used: Jobs (io.hops.hopsworks.persistence.entity.jobs.description.Jobs), JobConfiguration (io.hops.hopsworks.persistence.entity.jobs.configuration.JobConfiguration)
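
For context, a minimal sketch of how another bean might call updateJobSchedule. The JobScheduleService bean, its method name, and the omitted Hopsworks import paths are assumptions; only the JobFacade.updateJobSchedule(int, ScheduleDTO) signature comes from the example above.

import javax.ejb.EJB;
import javax.ejb.Stateless;
// Hopsworks-specific imports for JobFacade and ScheduleDTO are omitted here;
// their packages follow the project layout shown in the example above.

// Hypothetical caller bean; only the call to updateJobSchedule is taken from the example.
@Stateless
public class JobScheduleService {

    @EJB
    private JobFacade jobFacade;

    public void reschedule(int jobId, ScheduleDTO schedule) {
        // updateJobSchedule returns true only when exactly one row was updated
        // by the Jobs.updateConfig named query.
        boolean updated = jobFacade.updateJobSchedule(jobId, schedule);
        if (!updated) {
            throw new IllegalStateException("Schedule update failed for job " + jobId);
        }
    }
}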

Example 2 with JobConfiguration

Use of io.hops.hopsworks.persistence.entity.jobs.configuration.JobConfiguration in the hopsworks project by logicalclocks.

The inspectProgram method of the JobController class:

@TransactionAttribute(TransactionAttributeType.NEVER)
public JobConfiguration inspectProgram(String path, Project project, Users user, JobType jobType) throws JobException {
    DistributedFileSystemOps udfso = null;
    try {
        // Open an HDFS client as the project-specific user before inspecting the program.
        String username = hdfsUsersBean.getHdfsUserName(project, user);
        udfso = dfs.getDfsOps(username);
        LOGGER.log(Level.FINE, "Inspecting executable job program by {0} at path: {1}", new Object[] { username, path });
        JobConfiguration jobConf = getConfiguration(project, jobType, true);
        switch (jobType) {
            case SPARK:
            case PYSPARK:
                // Spark and PySpark programs must point to a .jar, .py or .ipynb file.
                if (Strings.isNullOrEmpty(path) || !(path.endsWith(".jar") || path.endsWith(".py") || path.endsWith(".ipynb"))) {
                    throw new IllegalArgumentException("Path does not point to a .jar, .py or .ipynb file.");
                }
                return sparkController.inspectProgram((SparkJobConfiguration) jobConf, path, udfso);
            case FLINK:
                // Flink programs need no inspection; return the default configuration.
                return jobConf;
            default:
                throw new IllegalArgumentException("Job type not supported: " + jobType);
        }
    } finally {
        // Always release the HDFS client, even when inspection fails.
        if (udfso != null) {
            dfs.closeDfsClient(udfso);
        }
    }
}
Also used: DistributedFileSystemOps (io.hops.hopsworks.common.hdfs.DistributedFileSystemOps), DefaultJobConfiguration (io.hops.hopsworks.persistence.entity.project.jobs.DefaultJobConfiguration), SparkJobConfiguration (io.hops.hopsworks.persistence.entity.jobs.configuration.spark.SparkJobConfiguration), FlinkJobConfiguration (io.hops.hopsworks.persistence.entity.jobs.configuration.flink.FlinkJobConfiguration), JobConfiguration (io.hops.hopsworks.persistence.entity.jobs.configuration.JobConfiguration), TransactionAttribute (javax.ejb.TransactionAttribute)
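
A minimal sketch of a caller, assuming a hypothetical ProgramDraftService bean; only JobController.inspectProgram and the fact that the SPARK/PYSPARK branch returns a SparkJobConfiguration come from the example above (Hopsworks import paths omitted).

import javax.ejb.EJB;
import javax.ejb.Stateless;
// Hopsworks-specific imports (JobController, JobConfiguration, SparkJobConfiguration,
// JobType, Project, Users, JobException) are omitted; packages as listed above.

// Hypothetical caller bean that pre-fills a Spark job draft from an uploaded program.
@Stateless
public class ProgramDraftService {

    @EJB
    private JobController jobController;

    public SparkJobConfiguration inspectSparkProgram(String path, Project project, Users user)
        throws JobException {
        JobConfiguration conf = jobController.inspectProgram(path, project, user, JobType.SPARK);
        // For SPARK/PYSPARK the controller delegates to SparkController, so the
        // returned configuration can be used as a SparkJobConfiguration.
        return (SparkJobConfiguration) conf;
    }
}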

Example 3 with JobConfiguration

Use of io.hops.hopsworks.persistence.entity.jobs.configuration.JobConfiguration in the hopsworks project by logicalclocks.

The inspect method of the JobsResource class:

@ApiOperation(value = "Inspect user program and return a JobConfiguration", response = SparkJobConfiguration.class)
@GET
@Path("{jobtype : python|docker|spark|pyspark|flink}/inspection")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@AllowedProjectRoles({ AllowedProjectRoles.DATA_OWNER, AllowedProjectRoles.DATA_SCIENTIST })
@JWTRequired(acceptedTokens = { Audience.API, Audience.JOB }, allowedUserRoles = { "HOPS_ADMIN", "HOPS_USER" })
@ApiKeyRequired(acceptedScopes = { ApiScope.JOB }, allowedUserRoles = { "HOPS_ADMIN", "HOPS_USER" })
public Response inspect(
        @ApiParam(value = "job type", example = "spark") @PathParam("jobtype") JobType jobtype,
        @ApiParam(value = "path", example = "/Projects/demo_spark_admin000/Resources/spark-examples.jar", required = true) @QueryParam("path") String path,
        @Context SecurityContext sc) throws JobException {
    // Resolve the calling user from the JWT, then delegate inspection to the JobController.
    Users user = jWTHelper.getUserPrincipal(sc);
    JobConfiguration config = jobController.inspectProgram(path, project, user, jobtype);
    return Response.ok().entity(config).build();
}
Also used: Users (io.hops.hopsworks.persistence.entity.user.Users), SparkJobConfiguration (io.hops.hopsworks.persistence.entity.jobs.configuration.spark.SparkJobConfiguration), JobConfiguration (io.hops.hopsworks.persistence.entity.jobs.configuration.JobConfiguration), Path (javax.ws.rs.Path), Produces (javax.ws.rs.Produces), Consumes (javax.ws.rs.Consumes), GET (javax.ws.rs.GET), JWTRequired (io.hops.hopsworks.jwt.annotation.JWTRequired), ApiOperation (io.swagger.annotations.ApiOperation), ApiKeyRequired (io.hops.hopsworks.api.filter.apiKey.ApiKeyRequired), AllowedProjectRoles (io.hops.hopsworks.api.filter.AllowedProjectRoles)
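
A minimal client-side sketch of calling the inspection endpoint with the JAX-RS client API; the base URL, the project id segment of the path, and the bearer token are placeholders, and the response body is read as a raw JSON string rather than a typed SparkJobConfiguration.

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class InspectionClient {

    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        try {
            Response response = client
                .target("https://hopsworks.example.com/hopsworks-api/api")   // assumed base URL
                .path("project/119/jobs/spark/inspection")                   // assumed project id
                .queryParam("path", "/Projects/demo_spark_admin000/Resources/spark-examples.jar")
                .request(MediaType.APPLICATION_JSON)
                .header("Authorization", "Bearer <jwt>")                     // placeholder token
                .get();
            // On success the body is the JobConfiguration produced by inspectProgram.
            System.out.println(response.getStatus() + ": " + response.readEntity(String.class));
        } finally {
            client.close();
        }
    }
}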

Aggregations

JobConfiguration (io.hops.hopsworks.persistence.entity.jobs.configuration.JobConfiguration): 3
SparkJobConfiguration (io.hops.hopsworks.persistence.entity.jobs.configuration.spark.SparkJobConfiguration): 2
AllowedProjectRoles (io.hops.hopsworks.api.filter.AllowedProjectRoles): 1
ApiKeyRequired (io.hops.hopsworks.api.filter.apiKey.ApiKeyRequired): 1
DistributedFileSystemOps (io.hops.hopsworks.common.hdfs.DistributedFileSystemOps): 1
JWTRequired (io.hops.hopsworks.jwt.annotation.JWTRequired): 1
FlinkJobConfiguration (io.hops.hopsworks.persistence.entity.jobs.configuration.flink.FlinkJobConfiguration): 1
Jobs (io.hops.hopsworks.persistence.entity.jobs.description.Jobs): 1
DefaultJobConfiguration (io.hops.hopsworks.persistence.entity.project.jobs.DefaultJobConfiguration): 1
Users (io.hops.hopsworks.persistence.entity.user.Users): 1
ApiOperation (io.swagger.annotations.ApiOperation): 1
TransactionAttribute (javax.ejb.TransactionAttribute): 1
Consumes (javax.ws.rs.Consumes): 1
GET (javax.ws.rs.GET): 1
Path (javax.ws.rs.Path): 1
Produces (javax.ws.rs.Produces): 1