Example 6 with Migration

use of com.emc.storageos.db.client.model.Migration in project coprhd-controller by CoprHD.

the class BlockService method getVolumeMigrations.

/**
 * Returns a list of the migrations associated with the volume identified by
 * the id specified in the request.
 *
 * @prereq none
 *
 * @param id
 *            the URN of a ViPR volume.
 *
 * @brief Show volume migrations
 * @return A list specifying the id, name, and self link of the migrations
 *         associated with the volume
 */
@GET
@Produces({ MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
@Path("/{id}/migrations")
@CheckPermission(roles = { Role.SYSTEM_ADMIN, Role.SYSTEM_MONITOR, Role.TENANT_ADMIN })
public MigrationList getVolumeMigrations(@PathParam("id") URI id) {
    ArgValidator.checkFieldUriType(id, Volume.class, "id");
    MigrationList volumeMigrations = new MigrationList();
    URIQueryResultList migrationURIs = new URIQueryResultList();
    _dbClient.queryByConstraint(ContainmentConstraint.Factory.getMigrationVolumeConstraint(id), migrationURIs);
    Iterator<URI> migrationURIsIter = migrationURIs.iterator();
    while (migrationURIsIter.hasNext()) {
        URI migrationURI = migrationURIsIter.next();
        Migration migration = _permissionsHelper.getObjectById(migrationURI, Migration.class);
        if (BulkList.MigrationFilter.isUserAuthorizedForMigration(migration, getUserFromContext(), _permissionsHelper)) {
            volumeMigrations.getMigrations().add(toNamedRelatedResource(migration, migration.getLabel()));
        }
    }
    return volumeMigrations;
}
Also used : MigrationList(com.emc.storageos.model.block.MigrationList) Migration(com.emc.storageos.db.client.model.Migration) URI(java.net.URI) NullColumnValueGetter.isNullURI(com.emc.storageos.db.client.util.NullColumnValueGetter.isNullURI) URIQueryResultList(com.emc.storageos.db.client.constraint.URIQueryResultList) Path(javax.ws.rs.Path) Produces(javax.ws.rs.Produces) GET(javax.ws.rs.GET) SOURCE_TO_TARGET(com.emc.storageos.model.block.Copy.SyncDirection.SOURCE_TO_TARGET) CheckPermission(com.emc.storageos.security.authorization.CheckPermission)
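
For orientation, here is a minimal client-side sketch of how this resource might be invoked. The base URL, path prefix (/block/volumes), and auth-token handling are assumptions for illustration, not taken from the example above; only the relative path /{id}/migrations and the JSON media type come from the annotations shown.

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

public class ListVolumeMigrationsClient {
    public static void main(String[] args) {
        // Placeholder values; substitute a real CoprHD endpoint, volume URN, and token.
        String baseUrl = "https://coprhd.example.com:4443";
        String volumeId = args[0];
        String authToken = System.getenv("SDS_AUTH_TOKEN");

        Client client = ClientBuilder.newClient();
        // GET {base}/block/volumes/{id}/migrations returns the MigrationList built above.
        String response = client.target(baseUrl)
                .path("block/volumes/" + volumeId + "/migrations")
                .request(MediaType.APPLICATION_JSON)
                .header("X-SDS-AUTH-TOKEN", authToken)
                .get(String.class);
        System.out.println(response);
        client.close();
    }
}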

Example 7 with Migration

use of com.emc.storageos.db.client.model.Migration in project coprhd-controller by CoprHD.

the class VPlexBlockServiceApiImpl method changeVirtualArrayForVolumes.

/**
 * {@inheritDoc}
 */
@Override
public void changeVirtualArrayForVolumes(List<Volume> volumes, BlockConsistencyGroup cg, List<Volume> cgVolumes, VirtualArray newVirtualArray, String taskId) throws InternalException {
    // The volume cannot have active snapshots; if they remove the snapshots, they can perform the varray change.
    for (Volume volume : volumes) {
        List<BlockSnapshot> snapshots = getSnapshots(volume);
        if (!snapshots.isEmpty()) {
            for (BlockSnapshot snapshot : snapshots) {
                if (!snapshot.getInactive()) {
                    throw APIException.badRequests.volumeForVarrayChangeHasSnaps(volume.getId().toString());
                }
            }
        }
        // If the volume has mirrors then varray change will not
        // be allowed. User needs to explicitly delete mirrors first.
        // This is applicable for both Local and Distributed volumes.
        // For distributed volume getMirrors will get mirror if any
        // on source or HA side.
        StringSet mirrorURIs = volume.getMirrors();
        if (mirrorURIs != null && !mirrorURIs.isEmpty()) {
            List<VplexMirror> mirrors = _dbClient.queryObject(VplexMirror.class, StringSetUtil.stringSetToUriList(mirrorURIs));
            if (mirrors != null && !mirrors.isEmpty()) {
                throw APIException.badRequests.volumeForVarrayChangeHasMirrors(volume.getId().toString(), volume.getLabel());
            }
        }
    }
    // If the volumes are in a CG, limit the number of volumes that can be migrated in a single request, as is done for a vpool change.
    if ((cg != null) && (volumes.size() > _maxCgVolumesForMigration)) {
        throw APIException.badRequests.cgContainsTooManyVolumesForVArrayChange(cg.getLabel(), volumes.size(), _maxCgVolumesForMigration);
    }
    // For volumes in a local CG with multiple volumes, verify the target systems support CG data migration; otherwise we don't allow the varray change.
    if ((cg != null) && (cg.checkForType(Types.LOCAL)) && (cgVolumes.size() > 1)) {
        verifyTargetSystemsForCGDataMigration(volumes, null, newVirtualArray.getId());
    }
    // Create the volume descriptors for the virtual array change.
    List<VolumeDescriptor> descriptors = createVolumeDescriptorsForVarrayChange(volumes, newVirtualArray, taskId);
    try {
        // Orchestrate the virtual array change.
        BlockOrchestrationController controller = getController(BlockOrchestrationController.class, BlockOrchestrationController.BLOCK_ORCHESTRATION_DEVICE);
        controller.changeVirtualArray(descriptors, taskId);
        s_logger.info("Successfully invoked block orchestrator.");
    } catch (InternalException e) {
        s_logger.error("Controller error", e);
        for (VolumeDescriptor descriptor : descriptors) {
            // Mark the task as errored on the migration targets and their migrations.
            if (VolumeDescriptor.Type.VPLEX_MIGRATE_VOLUME.equals(descriptor.getType())) {
                _dbClient.error(Volume.class, descriptor.getVolumeURI(), taskId, e);
                _dbClient.error(Migration.class, descriptor.getMigrationId(), taskId, e);
            }
        }
        throw e;
    }
}
Also used : VolumeDescriptor(com.emc.storageos.blockorchestrationcontroller.VolumeDescriptor) BlockOrchestrationController(com.emc.storageos.blockorchestrationcontroller.BlockOrchestrationController) Volume(com.emc.storageos.db.client.model.Volume) Migration(com.emc.storageos.db.client.model.Migration) BlockSnapshot(com.emc.storageos.db.client.model.BlockSnapshot) StringSet(com.emc.storageos.db.client.model.StringSet) VplexMirror(com.emc.storageos.db.client.model.VplexMirror) InternalException(com.emc.storageos.svcs.errorhandling.resources.InternalException)
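
Read in isolation, the precondition checks above reduce to a simple eligibility rule: a volume can change virtual array only if it has no active snapshots and no VPLEX mirrors. A minimal, self-contained sketch of that rule, using hypothetical helper names rather than the CoprHD API:

import java.util.List;
import java.util.Set;

// Hypothetical condensation of the eligibility rule enforced above; not CoprHD code.
final class VarrayChangeGuard {

    // True if any snapshot of the volume is still active (i.e., not marked inactive).
    static boolean hasActiveSnapshots(List<Boolean> snapshotInactiveFlags) {
        return snapshotInactiveFlags.stream().anyMatch(inactive -> !inactive);
    }

    // True if the volume still carries VPLEX mirrors (local or on the HA side).
    static boolean hasMirrors(Set<String> mirrorUris) {
        return mirrorUris != null && !mirrorUris.isEmpty();
    }

    static boolean isEligibleForVarrayChange(List<Boolean> snapshotInactiveFlags, Set<String> mirrorUris) {
        return !hasActiveSnapshots(snapshotInactiveFlags) && !hasMirrors(mirrorUris);
    }
}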

Example 8 with Migration

use of com.emc.storageos.db.client.model.Migration in project coprhd-controller by CoprHD.

the class BlockDeviceController method addStepsForExpandVolume.

/*
 * Add workflow steps for volume expand.
 */
@Override
public String addStepsForExpandVolume(Workflow workflow, String waitFor, List<VolumeDescriptor> volumeDescriptors, String taskId) throws InternalException {
    // Get the list of Volumes that the BlockDeviceController needs to process.
    volumeDescriptors = VolumeDescriptor.filterByType(volumeDescriptors, new VolumeDescriptor.Type[] { VolumeDescriptor.Type.BLOCK_DATA, VolumeDescriptor.Type.RP_SOURCE, VolumeDescriptor.Type.RP_TARGET, VolumeDescriptor.Type.RP_EXISTING_SOURCE, VolumeDescriptor.Type.RP_VPLEX_VIRT_SOURCE, VolumeDescriptor.Type.RP_VPLEX_VIRT_TARGET }, null);
    if (volumeDescriptors == null || volumeDescriptors.isEmpty()) {
        return waitFor;
    }
    Map<URI, Long> volumesToExpand = new HashMap<URI, Long>();
    // Check to see if there are any migrations
    List<Migration> migrations = null;
    if (volumeDescriptors != null) {
        List<VolumeDescriptor> migrateDescriptors = VolumeDescriptor.filterByType(volumeDescriptors, new VolumeDescriptor.Type[] { VolumeDescriptor.Type.VPLEX_MIGRATE_VOLUME }, null);
        if (migrateDescriptors != null && !migrateDescriptors.isEmpty()) {
            // Load the migration objects for use later
            migrations = new ArrayList<Migration>();
            Iterator<VolumeDescriptor> migrationIter = migrateDescriptors.iterator();
            while (migrationIter.hasNext()) {
                Migration migration = _dbClient.queryObject(Migration.class, migrationIter.next().getMigrationId());
                migrations.add(migration);
            }
        }
    }
    for (VolumeDescriptor descriptor : volumeDescriptors) {
        // Grab the volume, let's see if an expand is really needed
        Volume volume = _dbClient.queryObject(Volume.class, descriptor.getVolumeURI());
        // If this volume is a VPLEX volume, check to see if we need to expand its backend volume.
        if (volume.getAssociatedVolumes() != null && !volume.getAssociatedVolumes().isEmpty()) {
            for (String volStr : volume.getAssociatedVolumes()) {
                URI volStrURI = URI.create(volStr);
                Volume associatedVolume = _dbClient.queryObject(Volume.class, volStrURI);
                boolean migrationExists = false;
                // If there are any volumes that are tagged for migration, ignore them.
                if (migrations != null && !migrations.isEmpty()) {
                    for (Migration migration : migrations) {
                        if (migration.getTarget().equals(volume.getId())) {
                            _log.info("Volume [{}] has a migration, ignore this volume for expand.", volume.getLabel());
                            migrationExists = true;
                            break;
                        }
                    }
                }
                // Only expand the backend volume if the new size > existing backend volume's provisioned capacity, otherwise we can ignore.
                if (!migrationExists && associatedVolume.getProvisionedCapacity() != null && descriptor.getVolumeSize() > associatedVolume.getProvisionedCapacity().longValue()) {
                    volumesToExpand.put(volStrURI, descriptor.getVolumeSize());
                }
            }
        } else {
            // Only expand if the new size > existing volume's provisioned capacity, otherwise we can ignore.
            if (volume.getProvisionedCapacity() != null && volume.getProvisionedCapacity().longValue() != 0 && descriptor.getVolumeSize() > volume.getProvisionedCapacity().longValue()) {
                volumesToExpand.put(volume.getId(), descriptor.getVolumeSize());
            }
        }
    }
    String nextStep = (volumesToExpand.size() > 0) ? BLOCK_VOLUME_EXPAND_GROUP : waitFor;
    for (Map.Entry<URI, Long> entry : volumesToExpand.entrySet()) {
        _log.info("Creating WF step for Expand Volume for  {}", entry.getKey().toString());
        Volume volumeToExpand = _dbClient.queryObject(Volume.class, entry.getKey());
        StorageSystem storage = _dbClient.queryObject(StorageSystem.class, volumeToExpand.getStorageController());
        String stepId = workflow.createStepId();
        workflow.createStep(BLOCK_VOLUME_EXPAND_GROUP, String.format("Expand Block volume %s", volumeToExpand), waitFor, storage.getId(), getDeviceType(storage.getId()), BlockDeviceController.class, expandVolumesMethod(volumeToExpand.getStorageController(), volumeToExpand.getPool(), volumeToExpand.getId(), entry.getValue()), rollbackExpandVolumeMethod(volumeToExpand.getStorageController(), volumeToExpand.getId(), stepId), stepId);
        _log.info("Creating workflow step {}", BLOCK_VOLUME_EXPAND_GROUP);
    }
    return nextStep;
}
Also used : VolumeDescriptor(com.emc.storageos.blockorchestrationcontroller.VolumeDescriptor) HashMap(java.util.HashMap) Migration(com.emc.storageos.db.client.model.Migration) NamedURI(com.emc.storageos.db.client.model.NamedURI) FCTN_MIRROR_TO_URI(com.emc.storageos.db.client.util.CommonTransformerFunctions.FCTN_MIRROR_TO_URI) URI(java.net.URI) Type(com.emc.storageos.db.client.model.DiscoveredDataObject.Type) LockType(com.emc.storageos.locking.LockType) InterfaceType(com.emc.storageos.db.client.model.StorageProvider.InterfaceType) TechnologyType(com.emc.storageos.db.client.model.BlockSnapshot.TechnologyType) RecordType(com.emc.storageos.volumecontroller.impl.monitoring.cim.enums.RecordType) Volume(com.emc.storageos.db.client.model.Volume) Map(java.util.Map) OpStatusMap(com.emc.storageos.db.client.model.OpStatusMap) HashMap(java.util.HashMap) StorageSystem(com.emc.storageos.db.client.model.StorageSystem)
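
The loop above spreads the expand decision across the VPLEX and non-VPLEX branches; condensed, the rule is: expand only when the volume is not the target of an in-flight migration and the requested size exceeds its current provisioned capacity. A hypothetical one-method sketch of that rule (not part of BlockDeviceController):

// Hypothetical condensation of the expand decision applied above; not CoprHD code.
final class ExpandDecision {
    static boolean needsExpand(boolean isMigrationTarget, Long provisionedCapacity, long requestedSize) {
        // Skip migration targets, volumes with unknown or zero capacity, and volumes already large enough.
        return !isMigrationTarget
                && provisionedCapacity != null
                && provisionedCapacity.longValue() != 0
                && requestedSize > provisionedCapacity.longValue();
    }
}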

Example 9 with Migration

use of com.emc.storageos.db.client.model.Migration in project coprhd-controller by CoprHD.

the class VPlexDeviceController method commitMigration.

/**
 * Invoked by the migration workflow to commit the migration after it has
 * been completed.
 *
 * @param vplexURI
 *            The URI of the VPlex storage system.
 * @param virtualVolumeURI
 *            The URI of the virtual volume.
 * @param migrationURI
 *            The URI of the data migration.
 * @param rename
 *            Indicates if the volume should be renamed after commit to
 *            conform to ViPR standard naming conventions.
 * @param newVpoolURI - the new virtual pool for the virtual volume (or null if not changing)
 * @param newVarrayURI - the new varray for the virtual volume (or null if not changing)
 * @param stepId
 *            The workflow step identifier.
 *
 * @throws WorkflowException
 */
public void commitMigration(URI vplexURI, URI virtualVolumeURI, URI migrationURI, Boolean rename, URI newVpoolURI, URI newVarrayURI, String stepId) throws WorkflowException {
    _log.info("Committing migration {}", migrationURI);
    Migration migration = null;
    VPlexApiClient client = null;
    try {
        // Update step state to executing.
        WorkflowStepCompleter.stepExecuting(stepId);
        // Get the migration.
        migration = getDataObject(Migration.class, migrationURI, _dbClient);
        // The migration may already have been committed by a prior execution of this workflow, so check the status.
        if (!VPlexMigrationInfo.MigrationStatus.COMMITTED.getStatusValue().equals(migration.getMigrationStatus())) {
            // Get the VPlex API client.
            StorageSystem vplexSystem = getDataObject(StorageSystem.class, vplexURI, _dbClient);
            client = getVPlexAPIClient(_vplexApiFactory, vplexSystem, _dbClient);
            _log.info("Got VPlex API client for system {}", vplexURI);
            // Make a call to the VPlex API client to commit the migration.
            // Note that for ingested VPLEX volumes created outside ViPR, we
            // don't want to update the name.
            List<VPlexMigrationInfo> migrationInfoList = new ArrayList<VPlexMigrationInfo>();
            Volume virtualVolume = getDataObject(Volume.class, virtualVolumeURI, _dbClient);
            try {
                migrationInfoList = client.commitMigrations(virtualVolume.getDeviceLabel(), Arrays.asList(migration.getLabel()), true, true, rename.booleanValue());
                _log.info("Committed migration {}", migration.getLabel());
            } catch (VPlexApiException vae) {
                _log.error("Exception committing VPlex migration: " + vae.getMessage(), vae);
                boolean committed = false;
                // Check the migration status. Maybe it committed even though we had an error.
                VPlexMigrationInfo migrationInfo = client.getMigrationInfo(migration.getLabel());
                if (migrationInfo.getStatus().equalsIgnoreCase(VPlexMigrationInfo.MigrationStatus.COMMITTED.name())) {
                    _log.info("Migration {} has committed despite exception", migration.getLabel());
                    migrationInfoList.clear();
                    migrationInfoList.add(migrationInfo);
                    committed = true;
                } else {
                    _log.info("Migration {} status {}", migration.getLabel(), migrationInfo.getStatus());
                }
                if (!committed) {
                    // This was observed at customer site COP-21257
                    if (vae.getServiceCode() == ServiceCode.VPLEX_API_RESPONSE_TIMEOUT_ERROR) {
                        // We are going to throw an error, but we don't want to rollback completely
                        _workflowService.setWorkflowRollbackContOnError(stepId, false);
                    }
                    WorkflowStepCompleter.stepFailed(stepId, vae);
                    return;
                }
            }
            // Below this point migration is committed, no turning back.
            // Initialize the migration info in the database.
            migration.setMigrationStatus(VPlexMigrationInfo.MigrationStatus.COMMITTED.getStatusValue());
            _dbClient.updateObject(migration);
            _log.info("Update migration status to committed");
            // Update the virtual volume native id and associated
            // volumes. Note that we don't update CoS until all
            // commits are successful.
            VPlexVirtualVolumeInfo updatedVirtualVolumeInfo = migrationInfoList.get(0).getVirtualVolumeInfo();
            // update any properties that were changed after migration including deviceLabel, nativeGuid, and nativeId.
            // also, if the updated volume isn't thin-enabled, it is thin-capable, and the target vpool supports thin
            // provisioning, then a call should be made to the VPLEX to flip the thin-enabled flag on for this volume.
            URI targetVolumeUri = migration.getTarget();
            Volume targetVolume = getDataObject(Volume.class, targetVolumeUri, _dbClient);
            if (updatedVirtualVolumeInfo != null) {
                _log.info(String.format("New virtual volume is %s", updatedVirtualVolumeInfo.toString()));
                // if the new virtual volume is thin-capable, but thin-enabled is not true,
                // that means we need to ask the VPLEX to convert it to a thin-enabled volume.
                // this doesn't happen automatically for thick-to-thin data migrations.
                boolean isThinEnabled = updatedVirtualVolumeInfo.isThinEnabled();
                if (!isThinEnabled && VPlexApiConstants.TRUE.equalsIgnoreCase(updatedVirtualVolumeInfo.getThinCapable())) {
                    if (verifyVplexSupportsThinProvisioning(vplexSystem)) {
                        if (null != targetVolume) {
                            _log.info(String.format("migration target Volume is %s", targetVolume.forDisplay()));
                            VirtualPool targetVirtualPool = getDataObject(VirtualPool.class, targetVolume.getVirtualPool(), _dbClient);
                            if (null != targetVirtualPool) {
                                _log.info(String.format("migration target VirtualPool is %s", targetVirtualPool.forDisplay()));
                                boolean doEnableThin = VirtualPool.ProvisioningType.Thin.toString().equalsIgnoreCase(targetVirtualPool.getSupportedProvisioningType());
                                if (doEnableThin) {
                                    _log.info(String.format("the new VirtualPool is thin, requesting VPLEX to enable thin provisioning on %s", updatedVirtualVolumeInfo.getName()));
                                    isThinEnabled = client.setVirtualVolumeThinEnabled(updatedVirtualVolumeInfo);
                                }
                            }
                        }
                    }
                }
                virtualVolume.setDeviceLabel(updatedVirtualVolumeInfo.getName());
                virtualVolume.setNativeId(updatedVirtualVolumeInfo.getPath());
                virtualVolume.setNativeGuid(updatedVirtualVolumeInfo.getPath());
                virtualVolume.setThinlyProvisioned(isThinEnabled);
            }
            // Note that for ingested volumes, there will be no associated volumes
            // at first.
            StringSet assocVolumes = virtualVolume.getAssociatedVolumes();
            if ((assocVolumes != null) && (!assocVolumes.isEmpty())) {
                // For a distributed volume, there could be multiple
                // migrations. When the first completes, there will
                // be no associated volumes. However, when the second
                // completes, there will be associated volumes. Note that
                // the migration source could be null.
                URI sourceVolumeUri = migration.getSource();
                if (sourceVolumeUri != null) {
                    assocVolumes.remove(sourceVolumeUri.toString());
                    // Retain any previous RP fields on the new target volume.
                    Volume sourceVolume = getDataObject(Volume.class, sourceVolumeUri, _dbClient);
                    if (sourceVolume != null) {
                        boolean targetUpdated = false;
                        if (NullColumnValueGetter.isNotNullValue(sourceVolume.getRpCopyName())) {
                            targetVolume.setRpCopyName(sourceVolume.getRpCopyName());
                            targetUpdated = true;
                        }
                        if (NullColumnValueGetter.isNotNullValue(sourceVolume.getInternalSiteName())) {
                            targetVolume.setInternalSiteName(sourceVolume.getInternalSiteName());
                            targetUpdated = true;
                        }
                        if (targetUpdated) {
                            _dbClient.updateObject(targetVolume);
                        }
                    }
                }
                assocVolumes.add(migration.getTarget().toString());
            } else {
                // NOTE: Now an ingested volume will have associated volumes.
                // It will no longer be considered an ingested volume.
                assocVolumes = new StringSet();
                assocVolumes.add(migration.getTarget().toString());
                virtualVolume.setAssociatedVolumes(assocVolumes);
            }
            updateMigratedVirtualVolumeVpoolAndVarray(virtualVolume, newVpoolURI, newVarrayURI);
            _dbClient.updateObject(virtualVolume);
            _log.info("Updated virtual volume.");
        } else {
            _log.info("The migration is already committed.");
            // Note that we don't set the device label and native id. If the
            // migration was committed outside of Bourne, the virtual volume
            // will still have the old name. If it was committed through
            // Bourne, these values would already have been updated.
            // Regardless, we have to update the vpool, and we update the
            // associated volumes in case it was committed outside of
            // Bourne.
            associateVplexVolumeWithMigratedTarget(migration, virtualVolumeURI);
            _log.info("Updated virtual volume.");
        }
        // Update the workflow step status.
        StringBuilder successMsgBuilder = new StringBuilder();
        successMsgBuilder.append("VPlex System: ");
        successMsgBuilder.append(vplexURI);
        successMsgBuilder.append(" migration: ");
        successMsgBuilder.append(migrationURI);
        successMsgBuilder.append(" was committed");
        _log.info(successMsgBuilder.toString());
        WorkflowStepCompleter.stepSucceded(stepId);
        _log.info("Updated workflow step state to success");
    } catch (VPlexApiException vae) {
        _log.error("Exception committing VPlex migration: " + vae.getMessage(), vae);
        WorkflowStepCompleter.stepFailed(stepId, vae);
    } catch (Exception ex) {
        _log.error("Exception committing VPlex migration: " + ex.getMessage(), ex);
        String opName = ResourceOperationTypeEnum.COMMIT_VOLUME_MIGRATION.getName();
        ServiceError serviceError = VPlexApiException.errors.commitMigrationFailed(opName, ex);
        WorkflowStepCompleter.stepFailed(stepId, serviceError);
    }
}
Also used : ServiceError(com.emc.storageos.svcs.errorhandling.model.ServiceError) Migration(com.emc.storageos.db.client.model.Migration) ArrayList(java.util.ArrayList) VirtualPool(com.emc.storageos.db.client.model.VirtualPool) VPlexVirtualVolumeInfo(com.emc.storageos.vplex.api.VPlexVirtualVolumeInfo) NamedURI(com.emc.storageos.db.client.model.NamedURI) URI(java.net.URI) InternalException(com.emc.storageos.svcs.errorhandling.resources.InternalException) InternalServerErrorException(com.emc.storageos.svcs.errorhandling.resources.InternalServerErrorException) VPlexApiException(com.emc.storageos.vplex.api.VPlexApiException) ControllerException(com.emc.storageos.volumecontroller.ControllerException) IOException(java.io.IOException) URISyntaxException(java.net.URISyntaxException) WorkflowException(com.emc.storageos.workflow.WorkflowException) DatabaseException(com.emc.storageos.db.exceptions.DatabaseException) DeviceControllerException(com.emc.storageos.exceptions.DeviceControllerException) VPlexMigrationInfo(com.emc.storageos.vplex.api.VPlexMigrationInfo) Volume(com.emc.storageos.db.client.model.Volume) VPlexApiException(com.emc.storageos.vplex.api.VPlexApiException) VPlexApiClient(com.emc.storageos.vplex.api.VPlexApiClient) StringSet(com.emc.storageos.db.client.model.StringSet) StorageSystem(com.emc.storageos.db.client.model.StorageSystem)
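
One detail worth pulling out of the catch block above is the verify-after-exception pattern: when commitMigrations throws, the code re-reads the migration status before failing the step, because the commit may have landed on the array despite the error (for example a response timeout, COP-21257). A generic, hypothetical sketch of that pattern, independent of the VPLEX client API:

import java.util.function.BooleanSupplier;

// Hypothetical, generic sketch of the verify-after-exception pattern used in commitMigration:
// if the operation throws, check the actual state before declaring failure, since the
// operation may have completed despite the error.
final class VerifyAfterException {
    static boolean runWithVerification(Runnable operation, BooleanSupplier stateIndicatesSuccess) {
        try {
            operation.run();                              // e.g. commit the migration
            return true;
        } catch (RuntimeException e) {
            // The call failed (possibly a timeout), but the work may still have been done.
            return stateIndicatesSuccess.getAsBoolean();  // e.g. re-read migration status
        }
    }
}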

Example 10 with Migration

use of com.emc.storageos.db.client.model.Migration in project coprhd-controller by CoprHD.

the class VPlexDeviceController method addStepsForMigrateVolumes.

/**
 * Adds steps in the passed workflow to migrate a volume.
 *
 * @param workflow
 * @param vplexURI
 * @param virtualVolumeURI
 * @param targetVolumeURIs
 * @param migrationsMap
 * @param poolVolumeMap
 * @param newVpoolURI
 * @param newVarrayURI
 * @param suspendBeforeCommit
 * @param suspendBeforeDeleteSource
 * @param opId
 * @param waitFor
 * @return
 * @throws InternalException
 */
public String addStepsForMigrateVolumes(Workflow workflow, URI vplexURI, URI virtualVolumeURI, List<URI> targetVolumeURIs, Map<URI, URI> migrationsMap, Map<URI, URI> poolVolumeMap, URI newVpoolURI, URI newVarrayURI, boolean suspendBeforeCommit, boolean suspendBeforeDeleteSource, String opId, String waitFor) throws InternalException {
    try {
        _log.info("VPlex controller migrate volume {} on VPlex {}", virtualVolumeURI, vplexURI);
        String volumeUserLabel = "Label Unknown";
        Volume virtualVolume = getDataObject(Volume.class, virtualVolumeURI, _dbClient);
        if (virtualVolume != null && virtualVolume.getDeviceLabel() != null && virtualVolume.getLabel() != null) {
            volumeUserLabel = virtualVolume.getLabel() + " (" + virtualVolume.getDeviceLabel() + ")";
        }
        // Get the VPlex storage system
        StorageSystem vplexSystem = getDataObject(StorageSystem.class, vplexURI, _dbClient);
        _log.info("Got VPlex system");
        // Create a step to validate the volume and prevent migration if the
        // the ViPR DB does not properly reflect the actual backend volumes.
        // A successful migration will delete the backend source volumes. If
        // the ViPR DB does not correctly reflect the actual backend volume,
        // we could delete a backend volume used by some other VPLEX volume.
        waitFor = createWorkflowStepToValidateVPlexVolume(workflow, vplexSystem, virtualVolumeURI, waitFor);
        Map<URI, Volume> volumeMap = new HashMap<URI, Volume>();
        Map<URI, StorageSystem> storageSystemMap = new HashMap<URI, StorageSystem>();
        for (URI volumeURI : targetVolumeURIs) {
            Volume volume = getDataObject(Volume.class, volumeURI, _dbClient);
            volumeMap.put(volumeURI, volume);
            StorageSystem storageSystem = getDataObject(StorageSystem.class, volume.getStorageController(), _dbClient);
            storageSystemMap.put(volume.getStorageController(), storageSystem);
        }
        // Set the project and tenant.
        Volume firstVolume = volumeMap.values().iterator().next();
        Project vplexProject = VPlexUtil.lookupVplexProject(firstVolume, vplexSystem, _dbClient);
        URI tenantURI = vplexProject.getTenantOrg().getURI();
        _log.info("Project is {}, Tenant is {}", vplexProject.getId(), tenantURI);
        waitFor = createWorkflowStepsForBlockVolumeExport(workflow, vplexSystem, storageSystemMap, volumeMap, vplexProject.getId(), tenantURI, waitFor);
        _log.info("Created workflow steps for volume export.");
        // Now make a migration Step for each passed target to which data
        // for the passed virtual volume will be migrated. The migrations
        // will be done from this controller.
        Iterator<URI> targetVolumeIter = targetVolumeURIs.iterator();
        while (targetVolumeIter.hasNext()) {
            URI targetVolumeURI = targetVolumeIter.next();
            _log.info("Target volume is {}", targetVolumeURI);
            URI migrationURI = migrationsMap.get(targetVolumeURI);
            _log.info("Migration is {}", migrationURI);
            String stepId = workflow.createStepId();
            _log.info("Migration opId is {}", stepId);
            Workflow.Method vplexExecuteMethod = new Workflow.Method(MIGRATE_VIRTUAL_VOLUME_METHOD_NAME, vplexURI, virtualVolumeURI, targetVolumeURI, migrationURI, newVarrayURI);
            Workflow.Method vplexRollbackMethod = new Workflow.Method(RB_MIGRATE_VIRTUAL_VOLUME_METHOD_NAME, vplexURI, migrationURI, stepId);
            _log.info("Creating workflow migration step");
            workflow.createStep(MIGRATION_CREATE_STEP, String.format("VPlex %s migrating to target volume %s.", vplexSystem.getId().toString(), targetVolumeURI.toString()), waitFor, vplexSystem.getId(), vplexSystem.getSystemType(), getClass(), vplexExecuteMethod, vplexRollbackMethod, stepId);
            _log.info("Created workflow migration step");
        }
        // Once the migrations complete, we will commit the migrations.
        // So, now we create the steps to commit the migrations.
        String waitForStep = MIGRATION_CREATE_STEP;
        List<URI> migrationURIs = new ArrayList<URI>(migrationsMap.values());
        List<URI> migrationSources = new ArrayList<URI>();
        Iterator<URI> migrationsIter = migrationsMap.values().iterator();
        while (migrationsIter.hasNext()) {
            URI migrationURI = migrationsIter.next();
            _log.info("Migration is {}", migrationURI);
            Migration migration = getDataObject(Migration.class, migrationURI, _dbClient);
            // The migration source volume may be null for ingested volumes
            // for which we do not know anything about the backend volumes.
            // If we don't know the source, we know we are migrating an
            // ingested volume and we will not want to do any renaming
            // after the commit as we do when migrating ViPR-created volumes,
            // which adhere to a standard naming convention.
            Boolean rename = Boolean.TRUE;
            if (migration.getSource() != null) {
                migrationSources.add(migration.getSource());
            } else {
                rename = Boolean.FALSE;
            }
            _log.info("Added migration source {}", migration.getSource());
            String stepId = workflow.createStepId();
            _log.info("Commit operation id is {}", stepId);
            Workflow.Method vplexExecuteMethod = new Workflow.Method(COMMIT_MIGRATION_METHOD_NAME, vplexURI, virtualVolumeURI, migrationURI, rename, newVpoolURI, newVarrayURI);
            Workflow.Method vplexRollbackMethod = new Workflow.Method(RB_COMMIT_MIGRATION_METHOD_NAME, migrationURIs, newVpoolURI, newVarrayURI, stepId);
            _log.info("Creating workflow step to commit migration");
            String stepDescription = String.format("migration commit step on VPLEX %s of volume %s", vplexSystem.getSerialNumber(), volumeUserLabel);
            waitForStep = workflow.createStep(MIGRATION_COMMIT_STEP, stepDescription, waitForStep, vplexSystem.getId(), vplexSystem.getSystemType(), getClass(), vplexExecuteMethod, vplexRollbackMethod, suspendBeforeCommit, stepId);
            workflow.setSuspendedStepMessage(stepId, COMMIT_MIGRATION_SUSPEND_MESSAGE);
            _log.info("Created workflow step to commit migration");
        }
        // Create a step that creates a sub workflow to delete the old
        // migration source volumes, which are no longer used by the
        // virtual volume. We also update the virtual volume CoS. If
        // we make it to this step, then all migrations were committed.
        // We do this in a sub workflow because we don't want to
        // initiate rollback regardless of success or failure.
        String stepId = workflow.createStepId();
        Workflow.Method vplexExecuteMethod = new Workflow.Method(DELETE_MIGRATION_SOURCES_METHOD, vplexURI, virtualVolumeURI, newVpoolURI, newVarrayURI, migrationSources);
        List<String> migrationSourceLabels = new ArrayList<>();
        Iterator<Volume> volumeIter = _dbClient.queryIterativeObjects(Volume.class, migrationSources);
        while (volumeIter.hasNext()) {
            migrationSourceLabels.add(volumeIter.next().getNativeGuid());
        }
        String stepDescription = String.format("post-migration delete of original source backing volumes [%s] associated with virtual volume %s", Joiner.on(',').join(migrationSourceLabels), volumeUserLabel);
        workflow.createStep(DELETE_MIGRATION_SOURCES_STEP, stepDescription, waitForStep, vplexSystem.getId(), vplexSystem.getSystemType(), getClass(), vplexExecuteMethod, null, suspendBeforeDeleteSource, stepId);
        workflow.setSuspendedStepMessage(stepId, DELETE_MIGRATION_SOURCES_SUSPEND_MESSAGE);
        _log.info("Created workflow step to create sub workflow for source deletion");
        return DELETE_MIGRATION_SOURCES_STEP;
    } catch (Exception e) {
        throw VPlexApiException.exceptions.addStepsForChangeVirtualPoolFailed(e);
    }
}
Also used : HashMap(java.util.HashMap) Migration(com.emc.storageos.db.client.model.Migration) ArrayList(java.util.ArrayList) Workflow(com.emc.storageos.workflow.Workflow) NamedURI(com.emc.storageos.db.client.model.NamedURI) URI(java.net.URI) InternalException(com.emc.storageos.svcs.errorhandling.resources.InternalException) InternalServerErrorException(com.emc.storageos.svcs.errorhandling.resources.InternalServerErrorException) VPlexApiException(com.emc.storageos.vplex.api.VPlexApiException) ControllerException(com.emc.storageos.volumecontroller.ControllerException) IOException(java.io.IOException) URISyntaxException(java.net.URISyntaxException) WorkflowException(com.emc.storageos.workflow.WorkflowException) DatabaseException(com.emc.storageos.db.exceptions.DatabaseException) DeviceControllerException(com.emc.storageos.exceptions.DeviceControllerException) Project(com.emc.storageos.db.client.model.Project) Volume(com.emc.storageos.db.client.model.Volume) StorageSystem(com.emc.storageos.db.client.model.StorageSystem)
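
Each createStep call above pairs a forward (execute) method with a rollback method, and chains steps through the waitFor handle so later phases run only after earlier ones complete. A simplified, hypothetical illustration of that execute/rollback pairing, not the CoprHD Workflow engine:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical, simplified illustration of the execute/rollback step pairing used above.
final class MiniWorkflow {

    interface Step {
        void execute() throws Exception;
        void rollback();
    }

    private final Deque<Step> executed = new ArrayDeque<>();

    // Run steps in order; if one fails, roll back the already-executed steps in reverse order.
    void run(List<Step> steps) throws Exception {
        for (Step step : steps) {
            try {
                step.execute();
                executed.push(step);
            } catch (Exception e) {
                while (!executed.isEmpty()) {
                    executed.pop().rollback();
                }
                throw e;
            }
        }
    }
}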

Aggregations

Migration (com.emc.storageos.db.client.model.Migration) 33
InternalException (com.emc.storageos.svcs.errorhandling.resources.InternalException) 22
URI (java.net.URI) 22
Volume (com.emc.storageos.db.client.model.Volume) 20
StorageSystem (com.emc.storageos.db.client.model.StorageSystem) 18
URISyntaxException (java.net.URISyntaxException) 17
DatabaseException (com.emc.storageos.db.exceptions.DatabaseException) 16
DeviceControllerException (com.emc.storageos.exceptions.DeviceControllerException) 16
InternalServerErrorException (com.emc.storageos.svcs.errorhandling.resources.InternalServerErrorException) 16
ControllerException (com.emc.storageos.volumecontroller.ControllerException) 16
VPlexApiException (com.emc.storageos.vplex.api.VPlexApiException) 16
WorkflowException (com.emc.storageos.workflow.WorkflowException) 16
IOException (java.io.IOException) 16
NamedURI (com.emc.storageos.db.client.model.NamedURI) 14
ServiceError (com.emc.storageos.svcs.errorhandling.model.ServiceError) 12
ArrayList (java.util.ArrayList) 10
VPlexApiClient (com.emc.storageos.vplex.api.VPlexApiClient) 9
CheckPermission (com.emc.storageos.security.authorization.CheckPermission) 8
HashMap (java.util.HashMap) 8
VolumeDescriptor (com.emc.storageos.blockorchestrationcontroller.VolumeDescriptor) 7