VMware vMotion
VMware virtual machine migration is the process of moving a virtual machine (VM) from one host, datastore, or vCenter Server to another. We use this technology for better performance, workload balancing, or to avoid downtime during activities such as server maintenance and site-level switchover.
Requirements for vMotion
To implement vMotion between hosts, each host must meet the following basic requirements:
Datastore compatibility: The source and destination hosts must use shared storage to vMotion a virtual machine. This shared storage can be implemented over a SAN or iSCSI, and it may use VMFS or shared NAS. The disks of all virtual machines using VMFS must be available to both the source and target hosts.
Network compatibility: vMotion requires a Gigabit Ethernet network. Additionally, virtual machines on the source and destination hosts must have access to the same subnets, which means the network labels of each virtual Ethernet adapter should match. These networks must be configured on every ESX host.
CPU compatibility: To support vMotion, the source and destination hosts must have compatible CPUs. Different processor versions within the same family can be distinguished by comparing the CPUs' model, stepping level, and extended features.
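As a concrete illustration, the snippet below is a minimal pyVmomi sketch of how these compatibility checks can be run through the vSphere API before attempting a migration. The vCenter address, credentials, VM and host names, and the find_obj helper are all placeholders and assumptions, not part of the original text.

```python
# Minimal sketch (assumes pyVmomi is installed; all names are placeholders).
# The vmProvisioningChecker runs the same datastore/network/CPU checks that
# vCenter applies before a migration.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_obj(content, vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_obj(content, vim.VirtualMachine, "my-vm")
host = find_obj(content, vim.HostSystem, "esxi02.example.com")

# Returns a task whose result lists any compatibility errors or warnings.
check_task = content.vmProvisioningChecker.CheckMigrate_Task(vm=vm, host=host)
```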
Types of vMotion
In VMware virtualization we can perform several types of virtual machine migration; the types are described below:
Cold:
Cold migration is the migration of a powered-off virtual machine. With cold migration, you have the option of moving the associated disks from one datastore to another. The virtual machines are not required to be on shared storage.
If a virtual machine is configured to have a 64-bit guest operating system, vCenter Server generates a warning if you try to migrate it to a host that does not support 64-bit operating systems. Otherwise, CPU compatibility checks do not apply when you migrate a virtual machine with cold migration.
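For illustration, here is a minimal sketch of a cold migration through the API, reusing the connection and find_obj helper assumed in the earlier sketch; the VM, host, and datastore names are placeholders.

```python
# Sketch of a cold migration: the VM must be powered off, and host and
# datastore can change in one RelocateVM_Task call. Reuses the assumed
# "content" and "find_obj" from the earlier connection sketch.
from pyVmomi import vim

vm = find_obj(content, vim.VirtualMachine, "my-vm")
assert vm.runtime.powerState == vim.VirtualMachine.PowerState.poweredOff

spec = vim.vm.RelocateSpec()
spec.host = find_obj(content, vim.HostSystem, "esxi02.example.com")
spec.pool = spec.host.parent.resourcePool      # root resource pool of the target host
spec.datastore = find_obj(content, vim.Datastore, "datastore2")

task = vm.RelocateVM_Task(spec)                # shared storage is not required here
```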
Suspended:
Suspended migration moves a suspended virtual machine from its current host, datastore, or both to a new host or datastore. Suspended virtual machines can also be migrated across vCenter Server instances.
When you migrate a suspended virtual machine, the new host for the virtual machine must meet CPU compatibility requirements, because the virtual machine must be able to resume execution on the new host.
vMotion:
vMotion migrates a powered-on virtual machine from its current host to a new host with no downtime and zero data or connectivity loss. Only the virtual machine's CPU and memory state on the ESXi host is migrated; the virtual machine's files on the datastore are not moved.
The vMotion requirements include rules about shared storage, VM affinity, CPU compatibility, and cluster organization. You also need a vMotion-enabled VMkernel adapter on each host. Once your configuration is all set, it's easy to request a vMotion live migration with just a few clicks inside the vSphere Client.
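The same request can also be made through the API; the sketch below assumes the connected session and helper from the earlier sketches, with placeholder names.

```python
# Sketch of a live vMotion: only the running state moves to a new host; the
# VM's files stay on the same (shared) datastore. Assumes "content" and
# "find_obj" from the earlier connection sketch.
from pyVmomi import vim

vm = find_obj(content, vim.VirtualMachine, "my-vm")
target_host = find_obj(content, vim.HostSystem, "esxi02.example.com")

task = vm.MigrateVM_Task(pool=target_host.parent.resourcePool,
                         host=target_host,
                         priority=vim.VirtualMachine.MovePriority.highPriority)
```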
Storage vMotion:
Storage vMotion migrates a powered-on virtual machine's files from the current datastore to a new datastore. The ESXi host does not change; only the file location moves from the current datastore to the new one.
In this type of migration, virtual machines can also be moved across vCenter Server instances.
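A Storage vMotion maps to a RelocateSpec that names only a datastore, as in the minimal sketch below (same assumed session, helper, and placeholder names as the earlier sketches).

```python
# Sketch of a Storage vMotion: a RelocateSpec naming only a datastore moves a
# powered-on VM's files while the host stays the same. Assumes "content",
# "find_obj", and "vm" from the earlier sketches.
from pyVmomi import vim

spec = vim.vm.RelocateSpec()
spec.datastore = find_obj(content, vim.Datastore, "datastore2")  # only storage changes

task = vm.RelocateVM_Task(spec)
```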
VMware EVC (Enhanced vMotion Compatibility):
VMware EVC is a feature of VMware vSphere that allows virtual machines to move between ESX/ESXi hosts with different CPUs. VMware EVC masks the CPU features that do not match across all vMotion-enabled hosts, so that every host presents the same feature set. This works for different CPU versions from the same chipmaker; however, VMware EVC cannot enable vMotion between AMD and Intel processors. vCenter Server checks for compatibility between a VM's current and destination hosts before vMotioning the running VM.
Although VMware vSphere allows users to hide CPU features from individual VMs by using CPU compatibility mask settings, this is not recommended: depending on how the applications on the virtual machine function, such masks cannot always prevent a VM from accessing a CPU feature.
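For illustration, enabling an EVC baseline on a cluster can be scripted as in the sketch below. The cluster name and the EVC mode key ("intel-broadwell") are placeholder assumptions; the keys valid for your hardware are listed by vCenter. It reuses the assumed session and helper from the earlier sketches.

```python
# Sketch of enabling an EVC baseline on a cluster. Cluster name and EVC mode
# key are placeholders. Assumes "content" and "find_obj" from the earlier sketch.
from pyVmomi import vim

cluster = find_obj(content, vim.ClusterComputeResource, "Cluster01")
evc_manager = cluster.EvcManager()                       # per-cluster EVC manager
task = evc_manager.ConfigureEvcMode_Task(evcModeKey="intel-broadwell")
```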
vMotion enhancements in different versions of vSphere
VMware vMotion has been a very successful technology since the beginning because it is reliable and easy to configure. Over time, VMware has introduced more and more kinds of vMotion: you can migrate live virtual machines (VMs) to and from cloud providers or your remote datacenter, and Storage vMotion lets you migrate virtual disks to different storage devices.
The history of vMotion enhancements is quite long; most of the interesting ones appeared in vSphere 5.x and later versions. A few of them are described below.
1. vSphere v5.0:
Multi-NIC vMotion: Multi-NIC vMotion is the practice of using multiple NICs to transfer vMotion data. So why would you need more NICs?
- VMs with very large memory (the memory and execution state have to cross the wire)
- Large Storage vMotion jobs without shared storage (the data travels across the network)
- Long-distance vMotion (vMotion across greater distances than a traditional datacenter)
If multi-NIC vMotion is configured correctly, any vMotion job is load-balanced across all available links, increasing the bandwidth available to transfer data. Even a single-VM vMotion can take advantage of the multiple links, which can really speed up vMotions.
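The per-host half of that configuration is tagging more than one VMkernel adapter for vMotion traffic, sketched below; "vmk1"/"vmk2" and the host name are placeholder assumptions, and each vmknic should also be pinned to a different physical uplink in its port group's teaming policy.

```python
# Sketch: tag two VMkernel adapters for vMotion traffic on one host.
# Assumes "content" and "find_obj" from the earlier connection sketch.
from pyVmomi import vim

host = find_obj(content, vim.HostSystem, "esxi01.example.com")
vnic_manager = host.configManager.virtualNicManager
for device in ("vmk1", "vmk2"):
    vnic_manager.SelectVnicForNicType(nicType="vmotion", device=device)
```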
Stun During Page Send (SDPS): vSphere 4.1 introduced a vMotion enhancement called Quick Resume, which has since been replaced by Stun During Page Send, also often referred to as "Slowdown During Page Send". SDPS "slows down" the vCPUs of the virtual machine being vMotioned. Simply put, vMotion tracks the rate at which guest pages are changed, or as the engineers prefer to call it, "dirtied", and compares it to the vMotion transmission rate. If pages are dirtied faster than they can be transmitted, the source vCPUs are placed in a sleep state to decrease the rate at which pages are dirtied and to allow the vMotion process to complete. The vCPUs are only put to sleep for a few milliseconds at a time at most. SDPS injects frequent, tiny sleeps, disrupting the virtual machine's workload just enough to guarantee that vMotion can keep up with the memory page change rate, allowing a successful and non-disruptive completion of the process.
It is important to realize that SDPS only slows down a virtual machine in the cases where the memory page change rate would have previously caused a vMotion to fail.
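To make the mechanism concrete, here is a small toy model in Python; it is an illustration of the idea only, not VMware's implementation, and all rates and numbers are made up.

```python
# Toy model (not VMware code) of the SDPS behavior described above: without
# throttling, a dirty rate above the transmit rate means pre-copy never
# converges; with SDPS-style vCPU sleeps the dirty rate is forced below the
# transmit rate and the remaining page set shrinks every second.
def precopy_seconds(total_pages, dirty_rate, tx_rate, sdps=True, limit=10_000):
    """Seconds until the dirty set is small enough to switch over, else None."""
    remaining = total_pages
    for second in range(1, limit + 1):
        dirty = min(dirty_rate, tx_rate // 2) if sdps else dirty_rate
        remaining = max(0, remaining - tx_rate) + dirty
        if remaining <= tx_rate:          # final copy fits in one last pass
            return second
    return None

print(precopy_seconds(1_000_000, dirty_rate=60_000, tx_rate=50_000, sdps=False))  # None
print(precopy_seconds(1_000_000, dirty_rate=60_000, tx_rate=50_000, sdps=True))   # converges
```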
2. vSphere v5.1:
vMotion without shared storage - vMotion can now migrate virtual machines to a different host and datastore simultaneously. In addition, the storage device no longer needs to be shared between the source and destination hosts.
In vSphere 5.1 and later, vMotion does not require environments with shared storage. This is useful for performing cross-cluster migrations, when the target cluster machines might not have access to the source cluster's storage. Processes that are working on the virtual machine continue to run during the migration with vMotion.
You can place the virtual machine and all of its disks in a single location or select separate locations for the virtual machine configuration file and each virtual disk.
vMotion without shared storage is useful for the same virtual infrastructure administration tasks as vMotion with shared storage or Storage vMotion (a minimal API sketch follows the list below):
- Host maintenance. You can move virtual machines off of a host to allow maintenance of the host.
- Storage maintenance and reconfiguration. You can move virtual machines off of a storage device to allow maintenance or reconfiguration of the storage device without virtual machine downtime.
- Storage load redistribution. You can manually redistribute virtual machines or virtual disks to different storage volumes to balance capacity or improve performance.
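A shared-nothing migration is a single relocate request that changes both host and datastore on a powered-on VM, as in the sketch below (same assumed session, helper, and placeholder names as the earlier sketches).

```python
# Sketch of a shared-nothing migration (vSphere 5.1+): one RelocateVM_Task on
# a powered-on VM that changes host and datastore at the same time. Assumes
# "content", "find_obj", and "vm" from the earlier sketches.
from pyVmomi import vim

spec = vim.vm.RelocateSpec()
spec.host = find_obj(content, vim.HostSystem, "esxi03.example.com")
spec.pool = spec.host.parent.resourcePool
spec.datastore = find_obj(content, vim.Datastore, "local-ds-esxi03")

task = vm.RelocateVM_Task(spec)    # no datastore shared between source and target
```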
3. vSphere v5.5:
MetroCluster: MetroCluster allows synchronous mirroring of volumes between two storage controllers, providing storage high availability and disaster recovery. A MetroCluster configuration consists of two NetApp FAS controllers, residing either in the same datacenter or in two different physical locations, clustered together. It provides recovery from any single or multi-point storage component failure, and single-command recovery in the event of a complete site disaster.
4. vSphere v6.0:
Cross vSwitch vMotion: Allows VMs to move between virtual switches in an infrastructure managed by the same vCenter. This operation is transparent to the guest OS and works across different types of virtual switches (vSS to vSS, vSS to vDS, vDS to vDS).
Cross vCenter vMotion: Allows VMs to move across boundaries of vCenter Server, Datacenter Objects and Folder Objects. This operation simultaneously changes compute, storage, network and vCenter.
Long Distance vMotion: Enables vMotion to operate across long distances (in the vSphere 6.0 beta, the maximum supported network round-trip time for vMotion migrations is 100 milliseconds).
Note that all of these operations still require L2 network connectivity, because a VM migration does not change the IP address of the VM.
And also remember that a live migration requires that the processors of the target host provide the same instructions to the virtual machine after migration that the processors of the source host provided before migration. Clock speed, cache size, and number of cores can differ between source and target processors. However, the processors must come from the same vendor class (AMD or Intel) to be vMotion compatible.
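For illustration, a Cross vCenter vMotion can be requested through the same RelocateVM_Task API by adding a service locator for the destination vCenter. The sketch below is an assumption-heavy outline: dst_si is a second, already-connected ServiceInstance for the target vCenter, and dst_host, dst_datastore, dst_folder, the URL, thumbprint, and credentials are all placeholders.

```python
# Sketch of a Cross vCenter vMotion request (vSphere 6.0+). All destination
# names and credentials are placeholders; dst_si is an assumed, already-
# connected ServiceInstance for the target vCenter, and dst_host/dst_datastore/
# dst_folder are objects resolved through it.
from pyVmomi import vim

locator = vim.ServiceLocator(
    credential=vim.ServiceLocatorNamePassword(username="administrator@vsphere.local",
                                              password="password"),
    instanceUuid=dst_si.content.about.instanceUuid,   # target vCenter's instance UUID
    url="https://vcenter-b.example.com",
    sslThumbprint="AA:BB:CC:...")                     # target vCenter SSL thumbprint

spec = vim.vm.RelocateSpec(service=locator,
                           host=dst_host,
                           pool=dst_host.parent.resourcePool,
                           datastore=dst_datastore,
                           folder=dst_folder)         # destination VM folder
task = vm.RelocateVM_Task(spec)
```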
There are also additional requirements for both hosts and VMs, but they are almost the same as in previous versions of vMotion (see the vCenter Server and Host Management guide for more information). For example:
- Each host must be correctly licensed for vMotion.
- Each host must meet the shared storage requirements for vMotion.
- Each host must meet the networking requirements for vMotion.