
Example 1 with MigrateAnswer

Use of com.cloud.legacymodel.communication.answer.MigrateAnswer in project cosmic by MissionCriticalCloud.

From the class CitrixMigrateCommandWrapper, method execute. The wrapper migrates a VM to another host in the same XenServer pool: it resolves the destination host by IP address, ejects any inserted CDs (destroying the config drive VBD), migrates the VM, and re-attaches the config drive ISO on the destination.

@Override
public Answer execute(final MigrateCommand command, final CitrixResourceBase citrixResourceBase) {
    final Connection conn = citrixResourceBase.getConnection();
    final String vmName = command.getVmName();
    final String dstHostIpAddr = command.getDestinationIp();
    try {
        final Set<VM> vms = VM.getByNameLabel(conn, vmName);
        final Set<Host> hosts = Host.getAll(conn);
        Host dsthost = null;
        if (hosts != null) {
            for (final Host host : hosts) {
                if (host.getAddress(conn).equals(dstHostIpAddr)) {
                    dsthost = host;
                    break;
                }
            }
        }
        if (dsthost == null) {
            final String msg = "Migration failed: unable to find host " + dstHostIpAddr + " in XenServer pool " + citrixResourceBase.getHost().getPool();
            s_logger.warn(msg);
            return new MigrateAnswer(command, false, msg, null);
        }
        for (final VM vm : vms) {
            final Set<VBD> vbds = vm.getVBDs(conn);
            for (final VBD vbd : vbds) {
                final VBD.Record vbdRec = vbd.getRecord(conn);
                // Eject any inserted CD before migrating; a mounted ISO can block live migration.
                if (vbdRec.type.equals(Types.VbdType.CD) && !vbdRec.empty) {
                    vbd.eject(conn);
                    // Destroy the config drive VBD (any CD device other than the standard
                    // attach-ISO device); it is re-attached after the migration below.
                    if (!vbdRec.userdevice.equals(citrixResourceBase._attachIsoDeviceNum) && vbdRec.currentlyAttached) {
                        vbd.destroy(conn);
                    }
                }
            }
            citrixResourceBase.migrateVM(conn, dsthost, vm, vmName);
            vm.setAffinity(conn, dsthost);
        }
        // Re-attach the config drive ISO device to the VM on the destination host
        if (!citrixResourceBase.attachConfigDriveToMigratedVm(conn, vmName, dstHostIpAddr)) {
            s_logger.debug("Config drive ISO attach failed after migration for vm " + vmName);
        }
        return new MigrateAnswer(command, true, "migration succeeded", null);
    } catch (final Exception e) {
        s_logger.warn(e.getMessage(), e);
        return new MigrateAnswer(command, false, e.getMessage(), null);
    }
}
Also used: MigrateAnswer (com.cloud.legacymodel.communication.answer.MigrateAnswer), VM (com.xensource.xenapi.VM), Connection (com.xensource.xenapi.Connection), VBD (com.xensource.xenapi.VBD), Host (com.xensource.xenapi.Host)
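
For context, a minimal and hypothetical sketch of driving this wrapper directly and checking the resulting MigrateAnswer. Cosmic normally resolves command wrappers through a request-wrapper registry, so the direct instantiation, the helper name migrate, and the in-scope s_logger are illustrative assumptions, not the project's actual dispatch path:

// Hypothetical helper, not from the Cosmic sources.
private Answer migrate(final MigrateCommand command, final CitrixResourceBase resource) {
    final CitrixMigrateCommandWrapper wrapper = new CitrixMigrateCommandWrapper();
    final Answer answer = wrapper.execute(command, resource);
    if (!answer.getResult()) {
        // getResult()/getDetails() carry the success flag and message set by the wrapper.
        s_logger.warn("Migration of " + command.getVmName() + " failed: " + answer.getDetails());
    }
    return answer;
}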

Example 2 with MigrateAnswer

Use of com.cloud.legacymodel.communication.answer.MigrateAnswer in project cosmic by MissionCriticalCloud.

From the class LibvirtMigrateCommandWrapper, method execute. The KVM flavour rewrites the VNC listen address in the domain XML for the destination host, drops the VM snapshot metadata, runs the live migration on a worker thread while polling it (optionally capping downtime or pausing the guest), and finally cleans up disks, network interfaces, and the source domain definition.

@Override
public Answer execute(final MigrateCommand command, final LibvirtComputingResource libvirtComputingResource) {
    final String vmName = command.getVmName();
    String result = null;
    List<LibvirtVmDef.InterfaceDef> ifaces = null;
    final List<LibvirtDiskDef> disks;
    Domain dm = null;
    Connect dconn = null;
    Domain destDomain = null;
    Connect conn = null;
    final String xmlDesc;
    List<Ternary<String, Boolean, String>> vmsnapshots = null;
    try {
        final LibvirtUtilitiesHelper libvirtUtilitiesHelper = libvirtComputingResource.getLibvirtUtilitiesHelper();
        conn = libvirtUtilitiesHelper.getConnectionByVmName(vmName);
        ifaces = libvirtComputingResource.getInterfaces(conn, vmName);
        disks = libvirtComputingResource.getDisks(conn, vmName);
        dm = conn.domainLookupByName(vmName);
        VirtualMachineTO to = command.getVirtualMachine();
        /*
         * We replace the private IP address with the address of the destination host. This is because VNC listens
         * on the private IP address of the hypervisor, and that address is of course different on the target host.
         *
         * MigrateCommand.getDestinationIp() returns the private IP address of the target hypervisor, so it is safe
         * to use.
         *
         * The Domain.migrate method from libvirt supports passing a different XML description for the instance to
         * be used on the target host. This is supported by libvirt-java from version 0.5.0.
         *
         * CVE-2015-3252: get XML with sensitive information suitable for migration by using the
         * VIR_DOMAIN_XML_MIGRATABLE flag (value = 8), https://libvirt.org/html/libvirt-libvirt-domain.html#virDomainXMLFlags
         *
         * Use VIR_DOMAIN_XML_SECURE (value = 1) prior to v1.0.0.
         */
        // 1000000 equals v1.0.0
        final int xmlFlag = conn.getLibVirVersion() >= 1000000 ? 8 : 1;
        final String target = command.getDestinationIp();
        // xmlDesc = dm.getXMLDesc(xmlFlag).replace(libvirtComputingResource.getPrivateIp(), command.getDestinationIp());
        String vncPassword = to.getVncPassword();
        xmlDesc = replaceIpForVNCInDescFileAndNormalizePassword(dm.getXMLDesc(xmlFlag), target, vncPassword);
        // delete the metadata of vm snapshots before migration
        vmsnapshots = libvirtComputingResource.cleanVMSnapshotMetadata(dm);
        dconn = libvirtUtilitiesHelper.retrieveQemuConnection("qemu+tcp://" + command.getDestinationIp() + "/system");
        // run migration in thread so we can monitor it
        s_logger.info("Live migration of instance " + vmName + " initiated");
        final ExecutorService executor = Executors.newFixedThreadPool(1);
        final Callable<Domain> worker = new MigrateKvmAsync(libvirtComputingResource, dm, dconn, xmlDesc, vmName, command.getDestinationIp());
        final Future<Domain> migrateThread = executor.submit(worker);
        executor.shutdown();
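        // shutdown() only stops the executor from accepting new tasks; the submitted
        // migration keeps running, and isTerminated() becomes true once it finishes,
        // which is what the polling loop below waits for.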
        long sleeptime = 0;
        while (!executor.isTerminated()) {
            Thread.sleep(100);
            sleeptime += 100;
            if (sleeptime == 1000) {
                final int migrateDowntime = libvirtComputingResource.getMigrateDowntime();
                if (migrateDowntime > 0) {
                    try {
                        final int setDowntime = dm.migrateSetMaxDowntime(migrateDowntime);
                        if (setDowntime == 0) {
                            s_logger.debug("Set max downtime for migration of " + vmName + " to " + String.valueOf(migrateDowntime) + "ms");
                        }
                    } catch (final LibvirtException e) {
                        s_logger.debug("Failed to set max downtime for migration, perhaps migration completed? Error: " + e.getMessage());
                    }
                }
            }
            if (sleeptime % 1000 == 0) {
                s_logger.info("Waiting for migration of " + vmName + " to complete, waited " + sleeptime + "ms");
            }
            // pause vm if we meet the vm.migrate.pauseafter threshold and not already paused
            final int migratePauseAfter = libvirtComputingResource.getMigratePauseAfter();
            if (migratePauseAfter > 0 && sleeptime > migratePauseAfter) {
                DomainState state = null;
                try {
                    state = dm.getInfo().state;
                } catch (final LibvirtException e) {
                    s_logger.info("Couldn't get VM domain state after " + sleeptime + "ms: " + e.getMessage());
                }
                if (state != null && state == DomainState.VIR_DOMAIN_RUNNING) {
                    try {
                        s_logger.info("Pausing VM " + vmName + " due to property vm.migrate.pauseafter setting to " + migratePauseAfter + "ms to complete migration");
                        dm.suspend();
                    } catch (final LibvirtException e) {
                        // Pausing can race with migration finishing; if the domain is already gone, just log it.
                        s_logger.info("Failed to pause vm " + vmName + " : " + e.getMessage());
                    }
                }
            }
        }
        s_logger.info("Migration thread for " + vmName + " is done");
        destDomain = migrateThread.get(10, TimeUnit.SECONDS);
        if (destDomain != null) {
            for (final LibvirtDiskDef disk : disks) {
                libvirtComputingResource.cleanupDisk(disk);
            }
        }
    } catch (final LibvirtException e) {
        s_logger.debug("Can't migrate domain: " + e.getMessage());
        result = e.getMessage();
    } catch (final InterruptedException e) {
        s_logger.debug("Interrupted while migrating domain: " + e.getMessage());
        result = e.getMessage();
    } catch (final ExecutionException e) {
        s_logger.debug("Failed to execute while migrating domain: " + e.getMessage());
        result = e.getMessage();
    } catch (final TimeoutException e) {
        s_logger.debug("Timed out while migrating domain: " + e.getMessage());
        result = e.getMessage();
    } finally {
        try {
            if (dm != null && result != null) {
                // restore vm snapshots in case of failed migration
                if (vmsnapshots != null) {
                    libvirtComputingResource.restoreVMSnapshotMetadata(dm, vmName, vmsnapshots);
                }
            }
            if (dm != null) {
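                // undefine() removes the stored domain configuration on the source host;
                // a domain that is still running there merely becomes transient.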
                if (dm.isPersistent() == 1) {
                    dm.undefine();
                }
                dm.free();
            }
            if (dconn != null) {
                dconn.close();
            }
            if (destDomain != null) {
                destDomain.free();
            }
        } catch (final LibvirtException e) {
            s_logger.trace("Ignoring libvirt error.", e);
        }
    }
    if (result == null) {
        for (final LibvirtVmDef.InterfaceDef iface : ifaces) {
            // We don't know which "traffic type" is associated with
            // each interface at this point, so inform all vif drivers
            final List<VifDriver> allVifDrivers = libvirtComputingResource.getAllVifDrivers();
            for (final VifDriver vifDriver : allVifDrivers) {
                vifDriver.unplug(iface);
            }
        }
    }
    return new MigrateAnswer(command, result == null, result, null);
}
Also used: LibvirtException (org.libvirt.LibvirtException), VirtualMachineTO (com.cloud.legacymodel.to.VirtualMachineTO), MigrateAnswer (com.cloud.legacymodel.communication.answer.MigrateAnswer), LibvirtDiskDef (com.cloud.agent.resource.kvm.xml.LibvirtDiskDef), ExecutionException (java.util.concurrent.ExecutionException), TimeoutException (java.util.concurrent.TimeoutException), LibvirtVmDef (com.cloud.agent.resource.kvm.xml.LibvirtVmDef), Ternary (com.cloud.legacymodel.utils.Ternary), Connect (org.libvirt.Connect), VifDriver (com.cloud.agent.resource.kvm.vif.VifDriver), DomainState (org.libvirt.DomainInfo.DomainState), ExecutorService (java.util.concurrent.ExecutorService), Domain (org.libvirt.Domain), MigrateKvmAsync (com.cloud.agent.resource.kvm.async.MigrateKvmAsync)
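
MigrateKvmAsync itself is not shown on this page. Below is a minimal sketch of its likely shape, assuming it simply wraps libvirt-java's six-argument Domain.migrate (available since libvirt-java 0.5.0) in a Callable so the caller can poll progress. The omission of the LibvirtComputingResource constructor parameter seen at the call site, the VIR_MIGRATE_LIVE flag, and the tcp: URI are illustrative assumptions:

import java.util.concurrent.Callable;

import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.LibvirtException;

// Hypothetical sketch, not the Cosmic implementation.
public class MigrateKvmAsync implements Callable<Domain> {

    // VIR_MIGRATE_LIVE keeps the guest running while memory is copied.
    private static final long VIR_MIGRATE_LIVE = 1L;

    private final Domain dm;
    private final Connect dconn;
    private final String xmlDesc;
    private final String vmName;
    private final String destIp;

    public MigrateKvmAsync(final Domain dm, final Connect dconn, final String xmlDesc,
                           final String vmName, final String destIp) {
        this.dm = dm;
        this.dconn = dconn;
        this.xmlDesc = xmlDesc;
        this.vmName = vmName;
        this.destIp = destIp;
    }

    @Override
    public Domain call() throws LibvirtException {
        // migrate(destConnection, flags, destXml, destName, uri, bandwidth): passing the
        // rewritten XML applies the corrected VNC listen address on the target host.
        return dm.migrate(dconn, VIR_MIGRATE_LIVE, xmlDesc, vmName, "tcp:" + destIp, 0);
    }
}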

Aggregations

MigrateAnswer (com.cloud.legacymodel.communication.answer.MigrateAnswer): 2 uses
MigrateKvmAsync (com.cloud.agent.resource.kvm.async.MigrateKvmAsync): 1 use
VifDriver (com.cloud.agent.resource.kvm.vif.VifDriver): 1 use
LibvirtDiskDef (com.cloud.agent.resource.kvm.xml.LibvirtDiskDef): 1 use
LibvirtVmDef (com.cloud.agent.resource.kvm.xml.LibvirtVmDef): 1 use
VirtualMachineTO (com.cloud.legacymodel.to.VirtualMachineTO): 1 use
Ternary (com.cloud.legacymodel.utils.Ternary): 1 use
Connection (com.xensource.xenapi.Connection): 1 use
Host (com.xensource.xenapi.Host): 1 use
VBD (com.xensource.xenapi.VBD): 1 use
VM (com.xensource.xenapi.VM): 1 use
ExecutionException (java.util.concurrent.ExecutionException): 1 use
ExecutorService (java.util.concurrent.ExecutorService): 1 use
TimeoutException (java.util.concurrent.TimeoutException): 1 use
Connect (org.libvirt.Connect): 1 use
Domain (org.libvirt.Domain): 1 use
DomainState (org.libvirt.DomainInfo.DomainState): 1 use
LibvirtException (org.libvirt.LibvirtException): 1 use