
Example 1 with ProvTypeDTO

Use of io.hops.hopsworks.common.provenance.core.dto.ProvTypeDTO in project hopsworks by logicalclocks.

The class ProjectService, method example:

@POST
@Path("starterProject/{type}")
@Produces(MediaType.APPLICATION_JSON)
public Response example(@PathParam("type") String type, @Context HttpServletRequest req, @Context SecurityContext sc) throws DatasetException, GenericException, KafkaException, ProjectException, UserException, ServiceException, HopsSecurityException, FeaturestoreException, JobException, IOException, ElasticException, SchemaException, ProvenanceException {
    TourProjectType demoType;
    try {
        demoType = TourProjectType.fromString(type);
    } catch (IllegalArgumentException e) {
        throw new IllegalArgumentException("Type must be one of: " + Arrays.toString(TourProjectType.values()));
    }
    ProjectDTO projectDTO = new ProjectDTO();
    Project project = null;
    projectDTO.setDescription("A demo project for getting started with " + demoType.getDescription());
    Users user = jWTHelper.getUserPrincipal(sc);
    String username = user.getUsername();
    List<String> projectServices = new ArrayList<>();
    // save the project
    String readMeMessage = null;
    switch(demoType) {
        case KAFKA:
            // It's a Kafka guide
            projectDTO.setProjectName("demo_" + TourProjectType.KAFKA.getTourName() + "_" + username);
            populateActiveServices(projectServices, TourProjectType.KAFKA);
            readMeMessage = "jar file to demonstrate Kafka streaming";
            break;
        case SPARK:
            // It's a Spark guide
            projectDTO.setProjectName("demo_" + TourProjectType.SPARK.getTourName() + "_" + username);
            populateActiveServices(projectServices, TourProjectType.SPARK);
            readMeMessage = "jar file to demonstrate the creation of a spark batch job";
            break;
        case FS:
            // It's a Featurestore guide
            projectDTO.setProjectName("demo_" + TourProjectType.FS.getTourName() + "_" + username);
            populateActiveServices(projectServices, TourProjectType.FS);
            readMeMessage = "Dataset containing a jar file and data that can be used to run a sample spark-job for " + "inserting data in the feature store.";
            break;
        case ML:
            // It's a TensorFlow guide
            projectDTO.setProjectName("demo_" + TourProjectType.ML.getTourName() + "_" + username);
            populateActiveServices(projectServices, TourProjectType.ML);
            readMeMessage = "Jupyter notebooks and training data for demonstrating how to run Deep Learning";
            break;
        default:
            throw new IllegalArgumentException("Type must be one of: " + Arrays.toString(TourProjectType.values()));
    }
    projectDTO.setServices(projectServices);
    DistributedFileSystemOps dfso = null;
    DistributedFileSystemOps udfso = null;
    try {
        project = projectController.createProject(projectDTO, user, req.getSession().getId());
        dfso = dfs.getDfsOps();
        username = hdfsUsersBean.getHdfsUserName(project, user);
        udfso = dfs.getDfsOps(username);
        ProvTypeDTO projectMetaStatus = fsProvenanceController.getProjectProvType(user, project);
        String tourFilesDataset = projectController.addTourFilesToProject(user.getEmail(), project, dfso, dfso, demoType, projectMetaStatus);
        // TestJob dataset
        datasetController.generateReadme(udfso, tourFilesDataset, readMeMessage, project.getName());
    } catch (Exception ex) {
        projectController.cleanup(project, req.getSession().getId());
        throw ex;
    } finally {
        if (dfso != null) {
            dfso.close();
        }
        if (udfso != null) {
            dfs.closeDfsClient(udfso);
        }
    }
    return noCacheResponse.getNoCacheResponseBuilder(Response.Status.CREATED).entity(project).build();
}
Also used: TourProjectType(io.hops.hopsworks.common.project.TourProjectType) ProjectDTO(io.hops.hopsworks.common.project.ProjectDTO) Project(io.hops.hopsworks.persistence.entity.project.Project) DistributedFileSystemOps(io.hops.hopsworks.common.hdfs.DistributedFileSystemOps) ArrayList(java.util.ArrayList) Users(io.hops.hopsworks.persistence.entity.user.Users) ProvTypeDTO(io.hops.hopsworks.common.provenance.core.dto.ProvTypeDTO) DatasetException(io.hops.hopsworks.exceptions.DatasetException) FeaturestoreException(io.hops.hopsworks.exceptions.FeaturestoreException) ElasticException(io.hops.hopsworks.exceptions.ElasticException) IOException(java.io.IOException) ServiceException(io.hops.hopsworks.exceptions.ServiceException) UserException(io.hops.hopsworks.exceptions.UserException) ExecutionException(java.util.concurrent.ExecutionException) ProjectException(io.hops.hopsworks.exceptions.ProjectException) JobException(io.hops.hopsworks.exceptions.JobException) GenericException(io.hops.hopsworks.exceptions.GenericException) KafkaException(io.hops.hopsworks.exceptions.KafkaException) HopsSecurityException(io.hops.hopsworks.exceptions.HopsSecurityException) ProvenanceException(io.hops.hopsworks.exceptions.ProvenanceException) SchemaException(io.hops.hopsworks.exceptions.SchemaException) Path(javax.ws.rs.Path) POST(javax.ws.rs.POST) Produces(javax.ws.rs.Produces)
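
For context, a minimal client-side sketch of invoking this endpoint with the standard JAX-RS client API. The base URL, resource path prefix, and bearer token below are placeholder assumptions, not values taken from the Hopsworks codebase:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class StarterProjectClient {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        try {
            Response response = client
                .target("https://hopsworks.example.com/hopsworks-api/api") // placeholder base URL
                .path("project/starterProject/spark")                      // {type} = spark; path prefix assumed
                .request(MediaType.APPLICATION_JSON)
                .header("Authorization", "Bearer <jwt>")                   // placeholder JWT
                .post(null);                                               // the endpoint takes no request body
            System.out.println("HTTP " + response.getStatus());            // 201 Created on success
        } finally {
            client.close();
        }
    }
}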

Example 2 with ProvTypeDTO

Use of io.hops.hopsworks.common.provenance.core.dto.ProvTypeDTO in project hopsworks by logicalclocks.

The class ProjectController, method createProject:

/**
 * Creates a new project, the related project directory, the different services in
 * the project, and the master of the project.
 * <p>
 * This needs to be an atomic operation (all or nothing). REQUIRES_NEW makes
 * sure a new transaction is created even if this method is called from within
 * a transaction.
 *
 * @param projectDTO the project to create
 * @param owner the user creating the project
 * @param sessionId the caller's session id, used for cleanup on failure
 * @return the created project
 */
public Project createProject(ProjectDTO projectDTO, Users owner, String sessionId) throws DatasetException, GenericException, KafkaException, ProjectException, UserException, HopsSecurityException, ServiceException, FeaturestoreException, ElasticException, SchemaException, IOException {
    Long startTime = System.currentTimeMillis();
    // check that the project name is ok
    String projectName = projectDTO.getProjectName();
    FolderNameValidator.isValidProjectName(projectUtils, projectName);
    List<ProjectServiceEnum> projectServices = new ArrayList<>();
    if (projectDTO.getServices() != null) {
        for (String s : projectDTO.getServices()) {
            ProjectServiceEnum se = ProjectServiceEnum.valueOf(s.toUpperCase());
            projectServices.add(se);
        }
    }
    LOGGER.log(Level.FINE, () -> "PROJECT CREATION TIME. Step 1: " + (System.currentTimeMillis() - startTime));
    DistributedFileSystemOps dfso = null;
    Project project = null;
    try {
        dfso = dfs.getDfsOps();
        /*
         * create the project in the database.
         * If the creation goes through, it means that no other project with
         * the same name exists: this project row acts as a lock, so no other
         * project can be created with the same name until this one is removed
         * from the database.
         */
        try {
            project = createProject(projectName, owner, projectDTO.getDescription(), dfso);
        } catch (EJBException ex) {
            LOGGER.log(Level.WARNING, null, ex);
            Path dummy = new Path("/tmp/" + projectName);
            try {
                dfso.rm(dummy, true);
            } catch (IOException e) {
                LOGGER.log(Level.SEVERE, null, e);
            }
            throw new ProjectException(RESTCodes.ProjectErrorCode.PROJECT_EXISTS, Level.SEVERE, "project: " + projectName, ex.getMessage(), ex);
        }
        LOGGER.log(Level.FINE, "PROJECT CREATION TIME. Step 2 (hdfs): {0}", System.currentTimeMillis() - startTime);
        verifyProject(project, dfso, sessionId);
        LOGGER.log(Level.FINE, "PROJECT CREATION TIME. Step 3 (verify): {0}", System.currentTimeMillis() - startTime);
        // Run the handlers.
        try {
            ProjectHandler.runProjectPreCreateHandlers(projectHandlers, project);
        } catch (ProjectException ex) {
            cleanup(project, sessionId, null, true, owner);
            throw ex;
        }
        List<Future<?>> projectCreationFutures = new ArrayList<>();
        // This is an async call
        try {
            projectCreationFutures.add(certificatesController.generateCertificates(project, owner));
        } catch (Exception ex) {
            cleanup(project, sessionId, projectCreationFutures, true, owner);
            throw new HopsSecurityException(RESTCodes.SecurityErrorCode.CERT_CREATION_ERROR, Level.SEVERE, "project: " + project.getName() + ", owner: " + owner.getUsername(), ex.getMessage(), ex);
        }
        String username = hdfsUsersController.getHdfsUserName(project, owner);
        if (username == null || username.isEmpty()) {
            cleanup(project, sessionId, projectCreationFutures, true, owner);
            throw new UserException(RESTCodes.UserErrorCode.USER_WAS_NOT_FOUND, Level.SEVERE, "project: " + project.getName() + ", owner: " + owner.getUsername());
        }
        LOGGER.log(Level.FINE, "PROJECT CREATION TIME. Step 4 (certs): {0}", System.currentTimeMillis() - startTime);
        // all the verifications have passed, we can now create the project
        // create the project folder
        ProvTypeDTO provType = settings.getProvType().dto;
        try {
            mkProjectDIR(projectName, dfso);
            fsProvController.updateProjectProvType(project, provType, dfso);
        } catch (IOException | EJBException | ProvenanceException ex) {
            cleanup(project, sessionId, projectCreationFutures, true, owner);
            throw new ProjectException(RESTCodes.ProjectErrorCode.PROJECT_FOLDER_NOT_CREATED, Level.SEVERE, "project: " + projectName, ex.getMessage(), ex);
        }
        LOGGER.log(Level.FINE, "PROJECT CREATION TIME. Step 5 (folders): {0}", System.currentTimeMillis() - startTime);
        // update the project with the project folder inode
        try {
            setProjectInode(project, dfso);
        } catch (IOException | EJBException ex) {
            cleanup(project, sessionId, projectCreationFutures, true, owner);
            throw new ProjectException(RESTCodes.ProjectErrorCode.PROJECT_INODE_CREATION_ERROR, Level.SEVERE, "project: " + projectName, ex.getMessage(), ex);
        }
        LOGGER.log(Level.FINE, "PROJECT CREATION TIME. Step 6 (inodes): {0}", System.currentTimeMillis() - startTime);
        // set payment and quotas
        try {
            setProjectOwnerAndQuotas(project, dfso, owner);
        } catch (IOException | EJBException ex) {
            cleanup(project, sessionId, projectCreationFutures, true, owner);
            throw new ProjectException(RESTCodes.ProjectErrorCode.QUOTA_ERROR, Level.SEVERE, "project: " + project.getName(), ex.getMessage(), ex);
        }
        LOGGER.log(Level.FINE, "PROJECT CREATION TIME. Step 7 (quotas): {0}", System.currentTimeMillis() - startTime);
        try {
            hdfsUsersController.addProjectFolderOwner(project, dfso);
            createProjectLogResources(owner, project, dfso);
        } catch (IOException | EJBException ex) {
            cleanup(project, sessionId, projectCreationFutures);
            throw new ProjectException(RESTCodes.ProjectErrorCode.PROJECT_SET_PERMISSIONS_ERROR, Level.SEVERE, "project: " + projectName, ex.getMessage(), ex);
        }
        LOGGER.log(Level.FINE, "PROJECT CREATION TIME. Step 8 (logs): {0}", System.currentTimeMillis() - startTime);
        // delete any stale elastic indices and saved objects left over from a previous project with the same name, to avoid inconsistencies
        try {
            elasticController.deleteProjectIndices(project);
            elasticController.deleteProjectSavedObjects(projectName);
            LOGGER.log(Level.FINE, "PROJECT CREATION TIME. Step 9 (elastic cleanup): {0}", System.currentTimeMillis() - startTime);
        } catch (ElasticException ex) {
            LOGGER.log(Level.FINE, "Error while cleaning old project indices", ex);
        }
        logProject(project, OperationType.Add);
        // enable services
        for (ProjectServiceEnum service : projectServices) {
            try {
                projectCreationFutures.addAll(addService(project, service, owner, dfso, provType));
            } catch (RESTException | IOException ex) {
                cleanup(project, sessionId, projectCreationFutures);
                throw ex;
            }
        }
        try {
            for (Future<?> f : projectCreationFutures) {
                if (f != null) {
                    f.get();
                }
            }
        } catch (InterruptedException | ExecutionException ex) {
            LOGGER.log(Level.SEVERE, "Error while waiting for the certificate generation thread to finish. Will try to " + "cleanup...", ex);
            cleanup(project, sessionId, projectCreationFutures);
            throw new HopsSecurityException(RESTCodes.SecurityErrorCode.CERT_CREATION_ERROR, Level.SEVERE);
        }
        // Run the handlers.
        try {
            ProjectHandler.runProjectPostCreateHandlers(projectHandlers, project);
        } catch (ProjectException ex) {
            cleanup(project, sessionId, projectCreationFutures);
            throw ex;
        }
        try {
            project = environmentController.createEnv(project, owner);
        } catch (PythonException | EJBException ex) {
            cleanup(project, sessionId, projectCreationFutures);
            throw new ProjectException(RESTCodes.ProjectErrorCode.PROJECT_ANACONDA_ENABLE_ERROR, Level.SEVERE, "project: " + projectName, ex.getMessage(), ex);
        }
        LOGGER.log(Level.FINE, "PROJECT CREATION TIME. Step 10 (env): {0}", System.currentTimeMillis() - startTime);
        return project;
    } finally {
        if (dfso != null) {
            dfso.close();
        }
        LOGGER.log(Level.FINE, "PROJECT CREATION TIME. Step 11 (close): {0}", System.currentTimeMillis() - startTime);
    }
}
Also used: RESTException(io.hops.hopsworks.restutils.RESTException) ArrayList(java.util.ArrayList) HopsSecurityException(io.hops.hopsworks.exceptions.HopsSecurityException) ProjectException(io.hops.hopsworks.exceptions.ProjectException) ProvenanceException(io.hops.hopsworks.exceptions.ProvenanceException) PythonException(io.hops.hopsworks.exceptions.PythonException) UserException(io.hops.hopsworks.exceptions.UserException) ExecutionException(java.util.concurrent.ExecutionException) Path(org.apache.hadoop.fs.Path) ElasticException(io.hops.hopsworks.exceptions.ElasticException) DistributedFileSystemOps(io.hops.hopsworks.common.hdfs.DistributedFileSystemOps) IOException(java.io.IOException) ProjectServiceEnum(io.hops.hopsworks.persistence.entity.project.service.ProjectServiceEnum) TensorBoardException(io.hops.hopsworks.exceptions.TensorBoardException) DatasetException(io.hops.hopsworks.exceptions.DatasetException) EJBException(javax.ejb.EJBException) AlertException(io.hops.hopsworks.exceptions.AlertException) FeaturestoreException(io.hops.hopsworks.exceptions.FeaturestoreException) SQLException(java.sql.SQLException) AlertManagerConfigUpdateException(io.hops.hopsworks.alerting.exceptions.AlertManagerConfigUpdateException) ServiceException(io.hops.hopsworks.exceptions.ServiceException) ServingException(io.hops.hopsworks.exceptions.ServingException) AlertManagerResponseException(io.hops.hopsworks.alerting.exceptions.AlertManagerResponseException) CryptoPasswordNotFoundException(io.hops.hopsworks.exceptions.CryptoPasswordNotFoundException) AlertManagerUnreachableException(io.hops.hopsworks.alert.exception.AlertManagerUnreachableException) AlertManagerConfigReadException(io.hops.hopsworks.alerting.exceptions.AlertManagerConfigReadException) ServiceDiscoveryException(com.logicalclocks.servicediscoverclient.exceptions.ServiceDiscoveryException) JobException(io.hops.hopsworks.exceptions.JobException) GenericException(io.hops.hopsworks.exceptions.GenericException) AlertManagerConfigCtrlCreateException(io.hops.hopsworks.alerting.exceptions.AlertManagerConfigCtrlCreateException) KafkaException(io.hops.hopsworks.exceptions.KafkaException) YarnException(org.apache.hadoop.yarn.exceptions.YarnException) AlertManagerClientCreateException(io.hops.hopsworks.alerting.exceptions.AlertManagerClientCreateException) SchemaException(io.hops.hopsworks.exceptions.SchemaException) JupyterProject(io.hops.hopsworks.persistence.entity.jupyter.JupyterProject) Project(io.hops.hopsworks.persistence.entity.project.Project) Future(java.util.concurrent.Future) ProvTypeDTO(io.hops.hopsworks.common.provenance.core.dto.ProvTypeDTO)
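
The structure of createProject repeats one idiom: every fallible step calls cleanup(...) before rethrowing, and the filesystem handle is released in a finally block regardless of outcome. A condensed, self-contained sketch of that idiom, with placeholder names rather than Hopsworks APIs:

import java.io.Closeable;
import java.io.IOException;

public class StepwiseCreation {

    interface Step {
        void run() throws IOException;
    }

    // Runs each step in order; on failure, undoes partial state before
    // propagating, and always releases the handle, mirroring createProject.
    static void createWithCleanup(Closeable handle, Step... steps) throws IOException {
        try {
            for (Step step : steps) {
                try {
                    step.run();
                } catch (IOException ex) {
                    cleanup(); // stands in for projectController.cleanup(...)
                    throw ex;
                }
            }
        } finally {
            handle.close(); // stands in for dfso.close()
        }
    }

    static void cleanup() {
        // placeholder: remove the project row, folders, and certs created so far
    }
}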

Example 3 with ProvTypeDTO

Use of io.hops.hopsworks.common.provenance.core.dto.ProvTypeDTO in project hopsworks by logicalclocks.

The class DelaDatasetController, method createDataset:

public Dataset createDataset(Users user, Project project, String name, String description) throws DatasetException, HopsSecurityException, ProvenanceException {
    DistributedFileSystemOps dfso = dfs.getDfsOps();
    try {
        ProvTypeDTO projectMetaStatus = fsProvenanceController.getProjectProvType(user, project);
        datasetCtrl.createDataset(user, project, name, description, projectMetaStatus, false, DatasetAccessPermission.EDITABLE, dfso);
        return datasetController.getByProjectAndDsName(project, null, name);
    } finally {
        if (dfso != null) {
            dfso.close();
        }
    }
}
Also used: DistributedFileSystemOps(io.hops.hopsworks.common.hdfs.DistributedFileSystemOps) ProvTypeDTO(io.hops.hopsworks.common.provenance.core.dto.ProvTypeDTO)

Example 4 with ProvTypeDTO

Use of io.hops.hopsworks.common.provenance.core.dto.ProvTypeDTO in project hopsworks by logicalclocks.

The class ProjectService, method updateProject:

@PUT
@Path("{projectId}")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
@AllowedProjectRoles({ AllowedProjectRoles.DATA_OWNER })
public Response updateProject(ProjectDTO projectDTO, @PathParam("projectId") Integer id, @Context SecurityContext sc) throws ProjectException, DatasetException, HopsSecurityException, ServiceException, FeaturestoreException, ElasticException, SchemaException, KafkaException, ProvenanceException, IOException, UserException {
    RESTApiJsonResponse json = new RESTApiJsonResponse();
    Users user = jWTHelper.getUserPrincipal(sc);
    Project project = projectController.findProjectById(id);
    boolean updated = false;
    if (projectController.updateProjectDescription(project, projectDTO.getDescription(), user)) {
        json.setSuccessMessage(ResponseMessages.PROJECT_DESCRIPTION_CHANGED);
        updated = true;
    }
    if (projectController.updateProjectRetention(project, projectDTO.getRetentionPeriod(), user)) {
        json.setSuccessMessage(json.getSuccessMessage() + "\n" + ResponseMessages.PROJECT_RETENTON_CHANGED);
        updated = true;
    }
    if (!projectDTO.getServices().isEmpty()) {
        // Create dfso here and pass them to the different controllers
        DistributedFileSystemOps dfso = dfs.getDfsOps();
        DistributedFileSystemOps udfso = dfs.getDfsOps(hdfsUsersBean.getHdfsUserName(project, user));
        for (String s : projectDTO.getServices()) {
            ProjectServiceEnum se = ProjectServiceEnum.valueOf(s.toUpperCase());
            ProvTypeDTO projectMetaStatus = fsProvenanceController.getProjectProvType(user, project);
            List<Future<?>> serviceFutureList = projectController.addService(project, se, user, dfso, udfso, projectMetaStatus);
            if (serviceFutureList != null) {
                // Wait for the futures
                for (Future<?> f : serviceFutureList) {
                    try {
                        f.get();
                    } catch (InterruptedException | ExecutionException e) {
                        throw new ServiceException(RESTCodes.ServiceErrorCode.SERVICE_GENERIC_ERROR, Level.SEVERE, "service: " + s, e.getMessage(), e);
                    }
                }
                // Service successfully enabled
                json.setSuccessMessage(json.getSuccessMessage() + "\n" + ResponseMessages.PROJECT_SERVICE_ADDED + s);
                updated = true;
            }
        }
        // close dfsos
        if (dfso != null) {
            dfso.close();
        }
        if (udfso != null) {
            dfs.closeDfsClient(udfso);
        }
    }
    if (!updated) {
        json.setSuccessMessage(ResponseMessages.NOTHING_TO_UPDATE);
    }
    return noCacheResponse.getNoCacheResponseBuilder(Response.Status.CREATED).entity(json).build();
}
Also used: DistributedFileSystemOps(io.hops.hopsworks.common.hdfs.DistributedFileSystemOps) Users(io.hops.hopsworks.persistence.entity.user.Users) ProjectServiceEnum(io.hops.hopsworks.persistence.entity.project.service.ProjectServiceEnum) Project(io.hops.hopsworks.persistence.entity.project.Project) ServiceException(io.hops.hopsworks.exceptions.ServiceException) RESTApiJsonResponse(io.hops.hopsworks.api.util.RESTApiJsonResponse) Future(java.util.concurrent.Future) ExecutionException(java.util.concurrent.ExecutionException) ProvTypeDTO(io.hops.hopsworks.common.provenance.core.dto.ProvTypeDTO) Path(javax.ws.rs.Path) Produces(javax.ws.rs.Produces) Consumes(javax.ws.rs.Consumes) AllowedProjectRoles(io.hops.hopsworks.api.filter.AllowedProjectRoles) PUT(javax.ws.rs.PUT)
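
The loop in updateProject blocks on each Future returned by addService and converts failures into a ServiceException. A self-contained sketch of that wait-and-translate idiom; ServiceSetupException is a stand-in for Hopsworks' ServiceException:

import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

public class FutureWaiter {

    // Stand-in for io.hops.hopsworks.exceptions.ServiceException.
    static class ServiceSetupException extends Exception {
        ServiceSetupException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    // Blocks on each future and translates failures, as the loop above does.
    static void awaitAll(List<Future<?>> futures, String service) throws ServiceSetupException {
        for (Future<?> f : futures) {
            try {
                f.get(); // wait for the async service-setup step to finish
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the interrupt flag
                throw new ServiceSetupException("service: " + service, e);
            } catch (ExecutionException e) {
                throw new ServiceSetupException("service: " + service, e);
            }
        }
    }
}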

Example 5 with ProvTypeDTO

Use of io.hops.hopsworks.common.provenance.core.dto.ProvTypeDTO in project hopsworks by logicalclocks.

The class DatasetResource, method postByPath:

@POST
@Path("{path: .+}")
@Produces(MediaType.APPLICATION_JSON)
@ApiOperation(value = "Post an action on a file, dir or dataset.")
@AllowedProjectRoles({ AllowedProjectRoles.DATA_OWNER, AllowedProjectRoles.DATA_SCIENTIST })
@JWTRequired(acceptedTokens = { Audience.API, Audience.JOB }, allowedUserRoles = { "HOPS_ADMIN", "HOPS_USER" })
@ApiKeyRequired(acceptedScopes = { ApiScope.DATASET_CREATE }, allowedUserRoles = { "HOPS_ADMIN", "HOPS_USER" })
public Response postByPath(@Context UriInfo uriInfo, @Context SecurityContext sc, @PathParam("path") String path, @QueryParam("type") DatasetType datasetType, @QueryParam("target_project") String targetProjectName, @QueryParam("action") DatasetActions.Post action, @QueryParam("description") String description, @QueryParam("searchable") Boolean searchable, @QueryParam("generate_readme") Boolean generateReadme, @QueryParam("destination_path") String destPath, @QueryParam("destination_type") DatasetType destDatasetType, @DefaultValue("READ_ONLY") @QueryParam("permission") DatasetAccessPermission permission) throws DatasetException, ProjectException, HopsSecurityException, ProvenanceException, MetadataException, SchematizedTagException {
    Users user = jwtHelper.getUserPrincipal(sc);
    DatasetPath datasetPath;
    DatasetPath distDatasetPath;
    Project project = this.getProject();
    switch(action == null ? DatasetActions.Post.CREATE : action) {
        case CREATE:
            if (datasetType != null && !datasetType.equals(DatasetType.DATASET)) {
                // can only create dataset
                throw new DatasetException(RESTCodes.DatasetErrorCode.DATASET_OPERATION_INVALID, Level.FINE);
            }
            datasetPath = datasetHelper.getNewDatasetPath(project, path, DatasetType.DATASET);
            if (datasetPath.isTopLevelDataset()) {
                checkIfDataOwner(project, user);
            }
            if (datasetPath.isTopLevelDataset() && !datasetHelper.isBasicDatasetProjectParent(project, datasetPath)) {
                // a dataset name containing "::" would fake a shared dataset at creation time
                throw new DatasetException(RESTCodes.DatasetErrorCode.DATASET_NAME_INVALID, Level.FINE);
            }
            ProvTypeDTO projectProvCore = fsProvenanceController.getMetaStatus(user, project, searchable);
            ResourceRequest resourceRequest;
            if (datasetPath.isTopLevelDataset()) {
                datasetController.createDirectory(project, user, datasetPath.getFullPath(), datasetPath.getDatasetName(), datasetPath.isTopLevelDataset(), description, Provenance.getDatasetProvCore(projectProvCore, Provenance.MLType.DATASET), generateReadme, permission);
                resourceRequest = new ResourceRequest(ResourceRequest.Name.DATASET);
                Dataset ds = datasetController.getByProjectAndFullPath(project, datasetPath.getFullPath().toString());
                datasetHelper.updateDataset(project, datasetPath, ds);
                datasetPath.setInode(ds.getInode());
                DatasetDTO dto = datasetBuilder.build(uriInfo, resourceRequest, user, datasetPath, null, null, false);
                return Response.created(dto.getHref()).entity(dto).build();
            } else {
                datasetHelper.checkIfDatasetExists(project, datasetPath);
                datasetHelper.updateDataset(project, datasetPath);
                datasetController.createDirectory(project, user, datasetPath.getFullPath(), datasetPath.getDatasetName(), datasetPath.isTopLevelDataset(), description, Provenance.getDatasetProvCore(projectProvCore, Provenance.MLType.DATASET), generateReadme, permission);
                resourceRequest = new ResourceRequest(ResourceRequest.Name.INODES);
                Inode inode = inodeController.getInodeAtPath(datasetPath.getFullPath().toString());
                datasetPath.setInode(inode);
                InodeDTO dto = inodeBuilder.buildStat(uriInfo, resourceRequest, user, datasetPath, inode);
                return Response.created(dto.getHref()).entity(dto).build();
            }
        case COPY:
            datasetPath = datasetHelper.getDatasetPathIfFileExist(project, path, datasetType);
            distDatasetPath = datasetHelper.getDatasetPath(project, destPath, destDatasetType);
            datasetController.copy(project, user, datasetPath.getFullPath(), distDatasetPath.getFullPath(), datasetPath.getDataset(), distDatasetPath.getDataset());
            break;
        case MOVE:
            datasetPath = datasetHelper.getDatasetPathIfFileExist(project, path, datasetType);
            distDatasetPath = datasetHelper.getDatasetPath(project, destPath, destDatasetType);
            datasetController.move(project, user, datasetPath.getFullPath(), distDatasetPath.getFullPath(), datasetPath.getDataset(), distDatasetPath.getDataset());
            break;
        case SHARE:
            checkIfDataOwner(project, user);
            datasetPath = datasetHelper.getDatasetPathIfFileExist(project, path, datasetType);
            datasetController.share(targetProjectName, datasetPath.getFullPath().toString(), permission, project, user);
            break;
        case ACCEPT:
            checkIfDataOwner(project, user);
            datasetPath = datasetHelper.getDatasetPathIfFileExist(project, path, datasetType);
            datasetController.acceptShared(project, user, datasetPath.getDatasetSharedWith());
            break;
        case ZIP:
            datasetPath = datasetHelper.getDatasetPathIfFileExist(project, path, datasetType);
            if (destPath != null) {
                distDatasetPath = datasetHelper.getDatasetPath(project, destPath, destDatasetType);
                datasetController.zip(project, user, datasetPath.getFullPath(), distDatasetPath.getFullPath());
            } else {
                datasetController.zip(project, user, datasetPath.getFullPath(), null);
            }
            break;
        case UNZIP:
            datasetPath = datasetHelper.getDatasetPathIfFileExist(project, path, datasetType);
            if (destPath != null) {
                distDatasetPath = datasetHelper.getDatasetPath(project, destPath, destDatasetType);
                datasetController.unzip(project, user, datasetPath.getFullPath(), distDatasetPath.getFullPath());
            } else {
                datasetController.unzip(project, user, datasetPath.getFullPath(), null);
            }
            break;
        case REJECT:
            checkIfDataOwner(project, user);
            datasetPath = datasetHelper.getDatasetPathIfFileExist(project, path, datasetType);
            datasetController.rejectShared(datasetPath.getDatasetSharedWith());
            break;
        case PUBLISH:
            checkIfDataOwner(project, user);
            datasetPath = datasetHelper.getDatasetPathIfFileExist(project, path, datasetType);
            datasetController.shareWithCluster(project, datasetPath.getDataset(), user, datasetPath.getFullPath());
            break;
        case UNPUBLISH:
            checkIfDataOwner(project, user);
            datasetPath = datasetHelper.getDatasetPathIfFileExist(project, path, datasetType);
            datasetController.unshareFromCluster(project, datasetPath.getDataset(), user, datasetPath.getFullPath());
            break;
        case IMPORT:
            checkIfDataOwner(project, user);
            Project srcProject = projectController.findProjectByName(targetProjectName);
            datasetPath = datasetHelper.getDatasetPathIfFileExist(srcProject, path, datasetType);
            datasetController.share(project.getName(), datasetPath.getFullPath().toString(), DatasetAccessPermission.READ_ONLY, srcProject, user);
            break;
        case UNSHARE_ALL:
            checkIfDataOwner(project, user);
            datasetPath = datasetHelper.getDatasetPathIfFileExist(project, path, datasetType);
            datasetController.unshareAll(datasetPath.getDataset(), user);
            break;
        default:
            throw new WebApplicationException("Action not valid.", Response.Status.NOT_FOUND);
    }
    return Response.noContent().build();
}
Also used: Project(io.hops.hopsworks.persistence.entity.project.Project) Inode(io.hops.hopsworks.persistence.entity.hdfs.inode.Inode) WebApplicationException(javax.ws.rs.WebApplicationException) Dataset(io.hops.hopsworks.persistence.entity.dataset.Dataset) InodeDTO(io.hops.hopsworks.api.dataset.inode.InodeDTO) Users(io.hops.hopsworks.persistence.entity.user.Users) DatasetPath(io.hops.hopsworks.common.dataset.util.DatasetPath) ResourceRequest(io.hops.hopsworks.common.api.ResourceRequest) ProvTypeDTO(io.hops.hopsworks.common.provenance.core.dto.ProvTypeDTO) DatasetException(io.hops.hopsworks.exceptions.DatasetException) Path(javax.ws.rs.Path) POST(javax.ws.rs.POST) Produces(javax.ws.rs.Produces) JWTRequired(io.hops.hopsworks.jwt.annotation.JWTRequired) ApiOperation(io.swagger.annotations.ApiOperation) ApiKeyRequired(io.hops.hopsworks.api.filter.apiKey.ApiKeyRequired) AllowedProjectRoles(io.hops.hopsworks.api.filter.AllowedProjectRoles)
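
For reference, a client-side sketch of triggering the CREATE action above through its query-parameter interface. The base URL, path prefix, project id, dataset path, and token are placeholder assumptions, not values from the Hopsworks codebase:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

public class DatasetCreateClient {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        try {
            Response response = client
                .target("https://hopsworks.example.com/hopsworks-api/api") // placeholder base URL
                .path("project/120/dataset/myDataset")                     // placeholder project id and {path}
                .queryParam("action", "CREATE")                            // CREATE is also the default when omitted
                .queryParam("description", "demo dataset")
                .queryParam("searchable", "true")
                .queryParam("generate_readme", "true")
                .request(MediaType.APPLICATION_JSON)
                .header("Authorization", "Bearer <jwt>")                   // placeholder JWT
                .post(null);                                               // the action is driven by query params, no body
            System.out.println("HTTP " + response.getStatus());            // 201 Created for a new top-level dataset
        } finally {
            client.close();
        }
    }
}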

Aggregations

ProvTypeDTO (io.hops.hopsworks.common.provenance.core.dto.ProvTypeDTO): 5
DistributedFileSystemOps (io.hops.hopsworks.common.hdfs.DistributedFileSystemOps): 4
Project (io.hops.hopsworks.persistence.entity.project.Project): 4
DatasetException (io.hops.hopsworks.exceptions.DatasetException): 3
ServiceException (io.hops.hopsworks.exceptions.ServiceException): 3
Users (io.hops.hopsworks.persistence.entity.user.Users): 3
ExecutionException (java.util.concurrent.ExecutionException): 3
AllowedProjectRoles (io.hops.hopsworks.api.filter.AllowedProjectRoles): 2
ElasticException (io.hops.hopsworks.exceptions.ElasticException): 2
FeaturestoreException (io.hops.hopsworks.exceptions.FeaturestoreException): 2
GenericException (io.hops.hopsworks.exceptions.GenericException): 2
HopsSecurityException (io.hops.hopsworks.exceptions.HopsSecurityException): 2
JobException (io.hops.hopsworks.exceptions.JobException): 2
KafkaException (io.hops.hopsworks.exceptions.KafkaException): 2
ProjectException (io.hops.hopsworks.exceptions.ProjectException): 2
ProvenanceException (io.hops.hopsworks.exceptions.ProvenanceException): 2
SchemaException (io.hops.hopsworks.exceptions.SchemaException): 2
UserException (io.hops.hopsworks.exceptions.UserException): 2
ProjectServiceEnum (io.hops.hopsworks.persistence.entity.project.service.ProjectServiceEnum): 2
IOException (java.io.IOException): 2