Sunday, 20 March 2016

Cluster and vMotion enhancements in vSphere 6


MSCS Clustering

With vSphere 6, VMware further enhanced the availability features.
vSphere 6 now supports:
·         Windows 2012 R2 and SQL Server 2012, running both in failover cluster mode and using AlwaysOn Availability Groups.
·         IPv6.
·         The faster PVSCSI adapter when using MSCS.
·         vMotion of MSCS virtual machines running Windows 2008 and newer operating systems that are clustered across physical hosts using physical RDMs.
·         This allows customers running MSCS virtual machines to let vMotion and DRS place them within the vSphere cluster according to their needs.

HA Cluster

§  vSphere HA now includes Virtual Machine Component Protection (VMCP), which provides enhanced protection from All Paths Down (APD) and Permanent Device Loss (PDL) conditions for block (FC, iSCSI, FCoE) and file (NFS) storage.
§  vSphere HA now supports 64 ESXi hosts and 8,000 virtual machines, which greatly increases the scale of supported environments. It is also fully compatible with VMware Virtual Volumes, VMware vSphere Network I/O Control, IPv6, VMware NSX, and cross-vCenter vSphere vMotion, so vSphere HA can be used in more and larger environments with less concern for feature compatibility.

For more details, please refer to the Configuration Maximums guide.
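To make the VMCP behaviour above concrete, here is a small conceptual sketch (not VMware's implementation) of how a VMCP-style policy reacts differently to PDL and APD. The timeout values are assumptions modelled on the defaults (a 140-second APD timeout plus a response delay before VMCP acts):

```python
# Conceptual sketch of VMCP-style responses to storage failures.
# The constants are assumptions based on common defaults, not an API.
APD_TIMEOUT_S = 140          # time before an APD condition is declared (assumed default)
APD_RESPONSE_DELAY_S = 180   # additional delay before VMCP reacts (assumed default)

def vmcp_response(condition: str, seconds_elapsed: int) -> str:
    """Return the action a VMCP-like policy would take for a storage failure."""
    if condition == "PDL":
        # Permanent Device Loss: the array says the device is gone for good,
        # so terminate the VM and let vSphere HA restart it on a healthy host.
        return "terminate-and-restart"
    if condition == "APD":
        # All Paths Down: possibly transient, so wait out the timeouts first.
        if seconds_elapsed < APD_TIMEOUT_S + APD_RESPONSE_DELAY_S:
            return "wait"
        return "terminate-and-restart"
    return "no-action"
```

The key design point is that PDL is acted on immediately (the storage array has declared the device permanently lost), while APD is given time to recover before any VM is terminated.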


Fault Tolerance

§  FT now supports up to 4 vCPUs and 64 GB of RAM.
§  Fast Checkpointing, a new scalable technology, replaces Record-Replay to keep the primary and secondary VMs in sync.
§  vSphere 6.0 supports vMotion of both the primary and the secondary virtual machine.
§  With vSphere 6.0, you can back up FT-protected virtual machines. FT supports the vStorage APIs for Data Protection (VADP), and it works with the leading VADP solutions on the market from vendors such as Symantec, EMC, and HP.
§  With vSphere 6.0, FT supports all virtual disk types: eager-zeroed thick (EZT), thick, and thin provisioned. In vSphere 5.5, only eager-zeroed thick was supported.
§  Snapshots of FT-configured virtual machines are supported with vSphere 6.0.
§  The new version of FT keeps separate copies of the VM files (such as the .vmx and .vmdk files) to protect the primary VM from both host and storage failures. You can keep the primary and secondary VM files on different datastores.
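The idea behind Fast Checkpointing can be sketched in a few lines: instead of replaying every instruction on the secondary (Record-Replay, which could not scale past one vCPU), the primary repeatedly ships only the memory pages that changed since the last checkpoint. The sketch below models that delta mechanism; it is purely illustrative and not VMware's code:

```python
# Conceptual sketch of Fast Checkpointing (assumed mechanics, not VMware's code):
# per checkpoint interval, only the pages dirtied since the last checkpoint
# are sent to the secondary VM over the FT network.

def checkpoint_deltas(snapshots):
    """Yield, per checkpoint interval, the pages that changed since the last one."""
    previous = {}
    for snapshot in snapshots:            # each snapshot: {page_number: contents}
        delta = {page: data for page, data in snapshot.items()
                 if previous.get(page) != data}
        previous = dict(snapshot)
        yield delta                       # only the delta crosses the FT network

# Example: three checkpoint intervals of a tiny 3-page VM.
intervals = [
    {0: "a", 1: "b", 2: "c"},   # first checkpoint: every page is "dirty"
    {0: "a", 1: "x", 2: "c"},   # only page 1 changed
    {0: "a", 1: "x", 2: "c"},   # nothing changed this interval
]
deltas = list(checkpoint_deltas(intervals))
```

Because only deltas are transferred after the first checkpoint, the approach scales with the rate of memory change rather than with vCPU count, which is why FT could grow to 4 vCPUs.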

For more details about FT, please refer to the link below.

vMotion

     a.  Cross vSwitch vMotion:- Cross vSwitch vMotion lets you seamlessly migrate a virtual machine across different virtual switches while performing a vMotion. This means you are no longer restricted by the networks you created on the vSwitches when you vMotion a virtual machine: vMotion works across a mix of standard and distributed switches. Previously, you could only vMotion from vSS to vSS or within a single vDS; this limitation has been removed. The following Cross vSwitch vMotion migrations are possible:
  • vSS to vSS.
  • vSS to vDS.
  • vDS to vDS.
  • vDS to vSS is not allowed.
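The compatibility rules above amount to a small lookup table. Here they are encoded as one, purely for illustration (this is not a VMware API):

```python
# The Cross vSwitch vMotion compatibility matrix as a lookup table (illustrative).
ALLOWED_MIGRATIONS = {
    ("vSS", "vSS"),
    ("vSS", "vDS"),
    ("vDS", "vDS"),
    # ("vDS", "vSS") is deliberately absent: migrating back to a standard
    # switch would lose the distributed switch's metadata.
}

def cross_vswitch_allowed(source: str, destination: str) -> bool:
    """Return True if a vMotion between these switch types is permitted."""
    return (source, destination) in ALLOWED_MIGRATIONS
```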


   b.  Cross vCenter vMotion:- vSphere 6 also introduces support for Cross vCenter vMotion. vMotion can now perform the following changes simultaneously:
  •  Change compute (vMotion) – Performs the migration of virtual machines across compute hosts.
  •  Change storage (Storage vMotion) – Performs the migration of the virtual machine disks across datastores.
  • Change network (Cross vSwitch vMotion) – Performs the migration of a VM across different virtual switches.
  • Change vCenter (Cross vCenter vMotion) – Changes the vCenter Server instance that manages the VM.
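Since all four changes can be combined into a single operation, one way to picture a cross-vCenter migration request is a spec where each optional target enables one migration type. The names below are illustrative, not pyVmomi or vSphere API types:

```python
# A sketch of how the four simultaneous changes combine into one migration
# request. Field and class names are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MigrationSpec:
    target_host: Optional[str] = None       # compute change (vMotion)
    target_datastore: Optional[str] = None  # storage change (Storage vMotion)
    target_network: Optional[str] = None    # network change (Cross vSwitch vMotion)
    target_vcenter: Optional[str] = None    # management change (Cross vCenter vMotion)

    def changes(self):
        """List which of the four migration types this spec performs at once."""
        labels = [("vMotion", self.target_host),
                  ("Storage vMotion", self.target_datastore),
                  ("Cross vSwitch vMotion", self.target_network),
                  ("Cross vCenter vMotion", self.target_vcenter)]
        return [name for name, value in labels if value is not None]

# Example: move a VM to a host managed by a different vCenter Server.
example = MigrationSpec(target_host="esx02", target_vcenter="vc-b")
```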

 c. Long Distance vMotion:- With Long Distance vMotion you can now:

§  Migrate VMs across physical servers spread over a large geographic distance without interruption to applications.
§  Perform a permanent migration of VMs to another datacenter.
§  Migrate VMs to another site to avoid an imminent disaster.
§  Distribute VMs across sites to balance the system load.
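What makes these scenarios possible is the relaxed latency requirement: Long Distance vMotion supports round-trip latencies of up to 150 ms between sites, where standard vMotion had been limited to roughly 10 ms. A trivial pre-check sketch:

```python
# Pre-check sketch for Long Distance vMotion (illustrative, not a VMware tool).
# vSphere 6 supports up to 150 ms round-trip latency for Long Distance vMotion;
# standard vMotion was previously limited to about 10 ms RTT.
MAX_RTT_MS = 150

def long_distance_vmotion_ok(measured_rtt_ms: float) -> bool:
    """Return True if the inter-site link latency is within the supported limit."""
    return measured_rtt_ms <= MAX_RTT_MS
```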

DRS Cluster

§  Network-aware DRS – ability to specify bandwidth reservation for important VMs
§  Initial placement based on VM bandwidth reservation
§  Automatic remediation in response to reservation violations caused by pNIC saturation or pNIC failure
§  Tight integration with vMotion, producing a unified recommendation for cross-vCenter vMotion
§  Runs a combined DRS and SDRS algorithm to generate a (host, datastore) tuple
§  CPU, memory, and network reservations are considered as part of admission control
§  All the constraints are respected as part of the placement
§  VM-to-VM affinity and anti-affinity rules are carried over during cross-cluster and cross-vCenter migration
§  Initial placement enforces the affinity and anti-affinity constraints
§  Improved overhead computation – greatly improves the consolidation during power-on
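The placement behaviour described above — admission control over CPU, memory, and network reservations, anti-affinity constraints, and a combined (host, datastore) result — can be sketched as a toy greedy placement. This is purely illustrative and not the DRS algorithm itself:

```python
# Toy sketch of combined DRS + SDRS placement (illustrative, not VMware's code):
# check CPU, memory, and network reservations plus anti-affinity on each host,
# then find a datastore with room, returning a (host, datastore) tuple.
def place_vm(vm, hosts, datastores):
    """Return the first (host, datastore) pair that satisfies all reservations."""
    for host in hosts:
        fits = (host["free_cpu_mhz"] >= vm["cpu_mhz"]
                and host["free_mem_mb"] >= vm["mem_mb"]
                and host["free_net_mbps"] >= vm["net_mbps"]   # network-aware DRS
                and not (set(vm["anti_affinity"]) & set(host["vms"])))
        if not fits:
            continue
        for ds in datastores:
            if ds["free_gb"] >= vm["disk_gb"]:
                return (host["name"], ds["name"])   # the (host, datastore) tuple
    return None  # admission control fails: no placement satisfies all constraints

# Example inventory: esx01 is too small and hosts an anti-affine VM; ds1 is full.
hosts = [
    {"name": "esx01", "free_cpu_mhz": 1000, "free_mem_mb": 2048,
     "free_net_mbps": 100, "vms": ["web1"]},
    {"name": "esx02", "free_cpu_mhz": 4000, "free_mem_mb": 8192,
     "free_net_mbps": 1000, "vms": []},
]
datastores = [{"name": "ds1", "free_gb": 50}, {"name": "ds2", "free_gb": 500}]
vm = {"cpu_mhz": 2000, "mem_mb": 4096, "net_mbps": 500,
      "disk_gb": 100, "anti_affinity": ["web1"]}
placement = place_vm(vm, hosts, datastores)
```

Note how the network reservation is treated as just another admission-control dimension alongside CPU and memory, which is the essence of network-aware DRS.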