What’s New With VMware Hardware Version 11

With the release of vSphere 6.0 and ESXi 6.0, a new virtual machine hardware version was released as well.
Hardware Version 11 is the latest available version and is supported only on ESXi 6.0 and later.

So, what’s new with VMware Hardware Version 11?

  • Boot Virtual Machines with EFI Support
  • Full integration with VMware vCloud Air (VMware's hybrid cloud service)
    • Ability to create and migrate VMs from your local vSphere instance
  • The following operating systems are now fully supported
    • Windows 8.1 Update
    • Windows Server 2012 R2
    • Ubuntu 14.10
    • Red Hat Enterprise Linux 7
    • CentOS 7
    • SUSE Linux Enterprise 12
    • openSUSE 13.2

The list above covers just the highlights; a full list, including known issues with hardware version 11, can be found here.
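
If you want to verify which hardware version your VMs are running, or upgrade them to version 11, this can also be scripted. Below is a minimal pyVmomi sketch, assuming a vCenter at "vcenter.lab.local" and a VM named "app01" (both placeholders); the VM must be powered off before the upgrade task will succeed.

```python
# Sketch: check a VM's hardware version and upgrade it to version 11 (pyVmomi).
# Hostname, credentials and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only; use proper certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory and pick the VM we care about
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app01")

print("Current hardware version:", vm.config.version)   # e.g. "vmx-10"
if vm.config.version != "vmx-11":
    # The VM has to be powered off for a hardware upgrade
    vm.UpgradeVM_Task(version="vmx-11")

Disconnect(si)
```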

Virtual Machine Monitoring For APD & PDL

As I use vCenter 6 more and more, I keep discovering amazing new features.

One of the features I came across is Virtual Machine responses.
This feature allows you to specify what should happen to a VM in the event of an APD (All Paths Down) or PDL (Permanent Device Loss) condition. Come on, how cool is this?!
I bet everyone has run into an APD or PDL situation before and asked themselves why VMware does not offer a feature to restart the VM in such an event.

Virtual Machine Response 01

By default, when vSphere HA is enabled, Virtual Machine Monitoring is disabled and so are the responses. However, you do not need to enable Virtual Machine Monitoring for the Virtual Machine responses to work.
The screenshots below show the available settings for APD vs. PDL. As you can see, in the event of an APD, you have four options (a configuration sketch via the API follows at the end of this section):

  1. Disabled – do nothing and let the machine die
  2. Issue events – only issue an event, no automated response
  3. Power off and restart VMs (conservative) – power off the VM only if HA determines it can be restarted on another host
  4. Power off and restart VMs (aggressive) – power off the VM even if HA cannot determine whether it can be restarted on another host

APD

Similar settings are available in the event of PDL.

PDL

 

Important: this feature will NOT protect your VM against losing an RDM, and it also does not work with vSphere Fault Tolerance (FT).
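
These responses can also be configured through the vSphere API instead of the Web Client. The following pyVmomi sketch enables VM Component Protection on an HA cluster and picks a response for APD and PDL; the cluster name ("Cluster01"), connection details and the chosen values are placeholders, so treat it as a starting point rather than a finished script.

```python
# Sketch: enable VM Component Protection (APD/PDL responses) on a vSphere HA cluster.
# Cluster name, connection details and the chosen responses are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster01")

vmcp = vim.cluster.VmComponentProtectionSettings(
    vmStorageProtectionForAPD="restartConservative",  # Power off and restart VMs (conservative)
    vmTerminateDelayForAPDSec=180,                    # wait 3 minutes after the APD timeout
    vmReactionOnAPDCleared="reset",                   # reset the VM if the APD clears in the meantime
    vmStorageProtectionForPDL="restartAggressive",    # Power off and restart VMs on PDL
)

spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        enabled=True,                         # vSphere HA must be enabled
        vmComponentProtecting="enabled",      # turn on VM Component Protection itself
        defaultVmSettings=vim.cluster.DasVmSettings(
            vmComponentProtectionSettings=vmcp),
    )
)
cluster.ReconfigureComputeResource_Task(spec, modify=True)
Disconnect(si)
```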

 

vSphere 6 vMotion Enhancements

Starting with vSphere 6, VMware has added some new enhancements to the existing vMotion capabilities. Let's look at the history of vMotion over the last couple of vSphere releases:

  • vSphere 5.0
    • Multi-NIC vMotion, which allows you to dedicate multiple NICs to vMotion traffic
    • SDPS – Stun During Page Send was introduced. SDPS ensures that vMotion does not fail due to memory copy convergence issues. Previously, vMotion could fail if the virtual machine modified memory faster than it could be transferred; SDPS slows the virtual machine down to avoid such a case
  • vSphere 5.1
    • vMotion without shared storage – vMotion can now migrate virtual machines to a different host and datastore simultaneously. Also, the storage device no longer needs to be shared between the source host and destination host

So, what are the new vMotion enhancements in vSphere 6? There are three major enhancements:

vMotion across vCenters

  • Simultaneously change compute, storage, networks and management
  • Leverage vMotion with unshared storage
  • Support local, metro and cross-continental distances


Requirements for vMotion across vCenter Servers:

  • Supported only starting with vSphere 6
  • The same Single Sign-On (SSO) domain is required for the destination vCenter Server instance when using the UI; a different SSO domain is possible if you use the API (see the sketch below)
  • 250 Mbps network bandwidth per vMotion operation
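
For the API route mentioned above, a cross-vCenter migration is driven by a RelocateSpec that carries a ServiceLocator pointing at the destination vCenter Server. A hedged pyVmomi sketch follows; every parameter is a placeholder, and the destination host, resource pool, datastore and folder objects must be looked up through a separate connection to the destination vCenter.

```python
# Sketch: cross-vCenter vMotion via RelocateVM_Task with a ServiceLocator.
# All URLs, UUIDs, thumbprints and credentials are placeholders.
from pyVmomi import vim

def cross_vcenter_relocate(vm, dest_host, dest_pool, dest_datastore, dest_folder,
                           dest_vc_url, dest_vc_uuid, dest_vc_thumbprint,
                           dest_user, dest_pwd):
    """Relocate a VM to a host that is managed by a *different* vCenter Server."""
    service = vim.ServiceLocator(
        url=dest_vc_url,                   # e.g. "https://vcenter02.lab.local"
        instanceUuid=dest_vc_uuid,         # instance UUID of the destination vCenter
        sslThumbprint=dest_vc_thumbprint,  # SHA-1 thumbprint of the destination vCenter certificate
        credential=vim.ServiceLocatorNamePassword(username=dest_user, password=dest_pwd),
    )
    spec = vim.vm.RelocateSpec(
        host=dest_host,            # vim.HostSystem looked up on the destination vCenter
        pool=dest_pool,            # vim.ResourcePool on the destination vCenter
        datastore=dest_datastore,  # vim.Datastore on the destination vCenter
        folder=dest_folder,        # vim.Folder the VM should land in
        service=service,           # this is what makes it a cross-vCenter vMotion
    )
    return vm.RelocateVM_Task(spec)
```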

 

vMotion across vSwitches (aka x-vSwitch vMotion)

  • x-vSwitch vMotion is fully transparent to the guest.
  • Requires L2 VM network connectivity
  • Transfers vSphere Distributed Switch port metadata
  • Works with a mix of virtual switch types (standard and distributed)


Long-distance vMotion

  • Allows cross-continental vMotion with up to 100 ms round-trip latency (a quick latency check is sketched after this list)
  • Does not require vVols
  • Use Cases:
    • Permanent migrations
    • Disaster avoidance
    • SRM/DA testing
    • Multi-site load balancing
  • vMotion network will cross L3 boundaries
  • NFC network, carrying cold traffic, will be configurable
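
Before attempting a long-distance vMotion, it is worth checking that the round-trip time to the destination really stays below 100 ms. The stdlib-only Python sketch below estimates RTT by timing TCP connects; the hostname is a placeholder and port 8000 (the vMotion port) is assumed to be reachable from where you run it.

```python
# Sketch: rough RTT check against the destination vMotion address (placeholder hostname).
# TCP connect time is used as an RTT estimate, so no ICMP privileges are required.
import socket
import time

def tcp_rtt_ms(host, port=8000, samples=5):
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=2):
            pass
        rtts.append((time.monotonic() - start) * 1000.0)
    return rtts

rtts = tcp_rtt_ms("vmotion-dst.lab.local")
print("RTT samples (ms):", ["%.1f" % r for r in rtts])
print("Within the 100 ms limit" if max(rtts) <= 100 else "Exceeds the 100 ms limit")
```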


What’s required for Long-distance vMotion?

  • If you use vMotion across multiple vCenter Servers, the vCenter Servers must be connected via L3
  • VM network:
    • L2 connection
    • Same VM IP address available at destination
  • vMotion network:
    • L3 connection
    • Secure (dedicated or encrypted)
    • 250 Mbps per vMotion operation
  • NFC network:
    • Routed L3 through Management Network or L2 connection
    • Networking L4-L7 services manually configured at destination

Long-distance vMotion supports Storage Replication Architectures

  • Active-active replicated storage appears as shared storage to the VM
  • Migration over active-active replication is a classic vMotion
  • vVols are required for geo distances

vSphere 6 Fault Tolerance

VMware vSphere Fault Tolerance (FT) provides continuous availability for applications on virtual machines.

FT creates a live clone instance of a virtual machine that is always up-to-date with the primary virtual machine. In the event of a host/hardware failure, vSphere Fault Tolerance automatically triggers a failover, ensuring zero downtime and no data loss. VMware vSphere Fault Tolerance utilizes heartbeats between the primary virtual machine and the live clone to ensure availability. In the case of a failover, a new live clone is created to deliver continuous protection for the VM.

The VMware vSphere Fault Tolerance FAQ can be found here.


At first glance, VMware vSphere Fault Tolerance seems like a great addition to vSphere HA clusters to ensure continuous availability within your VMware vSphere environment.

However, in VMware vCenter Server 4.x and 5.x, only one virtual CPU per protected virtual machine is supported. If your VM uses more than one virtual CPU, you will not be able to enable VMware vSphere Fault Tolerance on that machine. Obviously, this is an enormous shortcoming and explains why many companies are not using VMware's FT capability.

So what’s new with vSphere 6 in regards to Fault Tolerance?

  • Up to 4 virtual CPUs per virtual machine
  • Up to 64 GB RAM per virtual machine
  • HA, DRS, DPM, SRM and VDS are supported
  • Protection for high performance multi-vCPU VMs
  • Faster checkpointing to keep the primary and secondary VM in sync
  • VMs with FT enabled can now be backed up with vStorage APIs for Data Protection (VADP)

With the new features in vSphere 6, Fault Tolerance will surely gain much more traction, since you can finally enable FT on VMs with up to 4 vCPUs. The sketch below shows one way to inventory the VMs that fit within the new limits.
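
Here is a small pyVmomi sketch for that inventory, reusing a connection like the one shown earlier ("content" is the vCenter content object); the 4 vCPU / 64 GB thresholds come straight from the list above.

```python
# Sketch: list VMs that fit within the vSphere 6 SMP-FT limits (<= 4 vCPUs, <= 64 GB RAM).
from pyVmomi import vim

def ft_candidates(content, max_vcpus=4, max_mem_gb=64):
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    candidates = []
    for vm in view.view:
        hw = vm.config.hardware
        if hw.numCPU <= max_vcpus and hw.memoryMB <= max_mem_gb * 1024:
            candidates.append((vm.name, hw.numCPU, hw.memoryMB))
    return candidates

# Example usage (assuming 'content' from an existing SmartConnect session):
# for name, cpus, mem in ft_candidates(content):
#     print("%s: %d vCPU, %d MB RAM - within the SMP-FT limits" % (name, cpus, mem))
```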

vSphere 6 NFSv4.1

As most of us know, VMware supports many storage protocols – FC, FCoE, iSCSI and NFS.
However, only NFSv3 was supported in vSphere 4.x and 5.x. NFSv3 has many limitations and shortcomings like:

  • No multipathing support
  • Proprietary advisory locking due to the lack of proper locking in the protocol
  • Limited security
  • Performance limited by the single server head

Starting with vSphere 6, VMware introduces support for NFSv4.1. Compared to NFSv3, v4.1 brings a bunch of new features:

  • Session Trunking/Multipathing
    • Increased performance from parallel access (load balancing)
    • Better availability from path failover
  • Improved Security
    • Kerberos, encryption and signing are supported
    • User authentication and non-root access become available
  • Improved Locking
    • In-band mandatory locks, no longer proprietary advisory locking
  • Better Error Recovery
    • Client and server are no longer stateless, and context is recoverable
  • Efficient Protocol
    • Less chatty, no file lock heartbeat
    • Session leases

Note: NFSv4.1 does not support SDRS, SIOC, SRM, or vVols.

Supportability of NFSv3 and NFSv4.1:

  • NFSv3 locking is not compatible with NFS 4.1
    • NFSv3 uses proprietary client-side locking
    • NFSv4.1 uses server-side locking
  • Single protocol access for a datastore
    • Use either NFSv3 or NFSv4.1 to mount the same NFS share across all ESXi hosts within a vSphere HA cluster
    • Mounting one NFS share as NFSv3 on one ESXi host and the same share as NFSv4.1 on another host is not supported! (A mount sketch via the API follows below.)
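
The following pyVmomi sketch mounts the same export as an NFS 4.1 datastore on every host in a cluster, so the protocol version stays consistent. The server addresses, export path, datastore name and the Kerberos security setting are placeholders/assumptions; drop securityType (or use "AUTH_SYS") if you are not using Kerberos.

```python
# Sketch: mount one export as NFS 4.1 on all hosts of a cluster (consistent protocol version).
# Server IPs, export path and datastore name are placeholders.
from pyVmomi import vim

NFS_SERVERS = ["192.168.10.11", "192.168.10.12"]   # several addresses -> session trunking/multipathing
EXPORT_PATH = "/export/vmware"
DATASTORE   = "nfs41-ds01"

def mount_nfs41(cluster):
    spec = vim.host.NasVolume.Specification(
        remoteHost=NFS_SERVERS[0],
        remoteHostNames=NFS_SERVERS,   # NFS 4.1 only: list of server addresses
        remotePath=EXPORT_PATH,
        localPath=DATASTORE,
        accessMode="readWrite",
        type="NFS41",                  # "NFS" would mount the share as NFSv3 instead
        securityType="SEC_KRB5",       # assumes Kerberos/AD is configured on each host
    )
    # Mount with the SAME protocol version on every host in the cluster --
    # mixing NFSv3 and NFSv4.1 for the same share is not supported.
    for host in cluster.host:
        host.configManager.datastoreSystem.CreateNasDatastore(spec)
```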

Kerberos Support for NFSv4.1:

  • NFSv3 only supports AUTH_SYS
  • NFSv4.1 supports AUTH_SYS and Kerberos
  • Requires Microsoft Active Directory as the KDC
  • Supports RPC header authentication (rpc_gss_svc_none or krb5)
  • Only supports DES-CBC-MD5
    • Weaker but widely used
    • AES-HMAC not supported by many vendors

Implications of using Kerberos:

  • NFSv3 to NFSv4.1
    • Be aware of the uid and gid on the files (a small audit sketch follows at the end of this section)
    • For NFSv3 the uid & gid will be root
    • Accessing files created with NFSv3 from a Kerberized NFSv4.1 client will result in permission-denied errors
  • Always use the same user on all hosts
    • vMotion and other features might fail if two hosts use different users
    • Host Profiles can be used to automate the user configuration across hosts
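
To catch ownership surprises before switching clients from NFSv3 to Kerberized NFSv4.1, a simple audit of uid/gid on the export can help. The stdlib-only sketch below walks a mount point (the path is a placeholder) and summarizes file ownership.

```python
# Sketch: summarize uid/gid ownership on an NFS export before an NFSv3 -> NFSv4.1 migration.
# "/mnt/nfs_export" is a placeholder mount point.
import os
from collections import Counter

def audit_ownership(mount_point="/mnt/nfs_export"):
    owners = Counter()
    for dirpath, _dirnames, filenames in os.walk(mount_point):
        for name in filenames:
            st = os.stat(os.path.join(dirpath, name))
            owners[(st.st_uid, st.st_gid)] += 1
    for (uid, gid), count in owners.most_common():
        print("uid=%d gid=%d: %d files" % (uid, gid, count))

audit_ownership()
```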