VMware Update Manager 5.5 Installation

Last week I started to set up VUM (VMware vSphere Update Manager) 5.5 to get my ESXi hosts updated and some 3rd-party software installed. If you have multiple ESXi hosts and need an easy way to keep your vSphere environment current, VUM is the way to go.

Additionally, VUM plays quite nicely with DRS (Distributed Resource Scheduler) to avoid any downtime for your VMs while patches are applied to the hosts. One host at a time, DRS migrates all running VMs off the host and then puts it into maintenance mode.

For the people who do not know where to get VMware Update Manager: it is part of the VMware vSphere 5.5 ISO. It took me a while to find it. Once you mount the ISO, you can find VMware Update Manager under VMware vCenter Support Tools.

The installation is pretty straightforward.

Once you have launched the installation wizard, click Next and accept the EULA (End User License Agreement).

On the next screen, you can select whether you want updates to be downloaded automatically from the default sources after the installation. By default, this option is enabled, but it can be disabled if you prefer to review the default sources first.

Next, you have to specify your vCenter Server address, port and credentials in order to register VUM with it.
Note: The vCenter Server Appliance (VCSA) has been supported since VUM 5.0. However, VUM itself still needs to be installed on a Windows server.

After you have specified your vCenter credentials, you have to choose whether to install a Microsoft SQL Server 2008 R2 Express instance or to use an existing database or a different database type. For smaller vSphere deployments it is OK to use SQL Server Express. However, if you plan to have more than 5 hosts or 50 VMs, you will be better off with a different database; more information can be found here.

On the next screen, you can specify whether VUM will be identified by IP address or DNS name. Personally, I always choose IP, since VUM would still be reachable even if the DNS server is down. Additionally, you can change the default ports for SOAP, Web and SSL.

Next, specify the installation folder for VMware vSphere Update Manager and the location for downloading patches.

Note: The patch download location should have at least 20 GB of free space.

Now, VUM will start to extract the executable for the Microsoft SQL Server.

For the actual Microsoft SQL Server installation, you do not need to do anything; it is automated by VUM.

Last but not least, click Finish to complete the installation.

Before you can start using VUM, log into your vCenter Server and click on Home. Under Solutions and Appliances, you should be able to see Update Manager.

If Update Manager does not show up, go to Plug-ins -> Manage Plug-ins and verify that the VMware vSphere Update Manager extension is enabled. You will also need to install the VUM client plug-in on your local machine through the Plug-in Manager.
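
If you want to double-check from the command line that VUM registered itself with vCenter, you can query the vCenter extension list with PowerCLI. This is only a rough sketch; it assumes PowerCLI is installed, that vcenter.lab.local stands in for your vCenter, and that Update Manager registers under its usual extension key (com.vmware.vcIntegrity):

# Connect to vCenter (prompts for credentials)
Connect-VIServer vcenter.lab.local

# List the registered extensions and look for the Update Manager key
$si = Get-View ServiceInstance
$extMgr = Get-View $si.Content.ExtensionManager
$extMgr.ExtensionList | Where-Object { $_.Key -like '*vcIntegrity*' } | Select-Object Key, Version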

VMware VVOL – Overview

Over the last couple of months, I have tried to get up to speed with all the talk about VMware VVOLs and how they will change the virtualization game. Below is a quick summary that should give you a good overview of what will change:

VVOLs no longer use VMFS3 or VMFS5 and will replace datastores as we know them today. Unlike LUNs, VVOLs are not pre-provisioned; they are created when you create a VM, power on a VM, or clone or snapshot a VM. VASA will be used to manage the underlying storage arrays through ESXi. For users who are not familiar with VASA, the vSphere APIs for Storage Awareness are used to create and present storage on the array to ESXi hosts. Storage arrays will be partitioned into logical containers (Storage Containers). Note: Storage Containers are not LUNs.

Virtual machine configuration files will be stored on Storage Containers.
Additionally, the main data services will be offloaded to the array. Storage policies can specify the capabilities of the underlying storage array. This allows capabilities like snapshots, replication, deduplication and encryption to be enabled on a per-VM level rather than a per-datastore level.

VASA Provider (VP)
The VP is a software component developed and provided by your storage vendor. The VP is used to present the array-side capabilities to the ESXi host, which can then utilize them or assign them to virtual machines. The VASA Provider uses the vSphere APIs for Storage Awareness, which are exported by ESXi. Most storage vendors will have the VP sitting on their controllers; however, VPs can also be delivered as a virtual appliance.
Protocol Endpoint (PE)
Protocol Endpoints enable the communication between ESXi hosts and the storage array. PEs are managed by the storage administrator and support FC, iSCSI, FCoE and NFS. New PEs will be discovered as part of the rescan process on an ESXi host.
Multi-pathing will work exactly the same way, and existing multi-path policies can be applied.
Storage Containers
Storage Containers are not LUNs!
A single VVOL can use multiple storage containers. The capacity is based on the physical storage capacity on your array.
As mentioned before, Storage Policies will be used to represent the capabilities of the underlying array. Storage Policies can be assigned to Storage Containers or VMs. Multiple PEs can access the same Storage Container, and every array has at least one Storage Container. Storage Containers will remind most people of datastores.
What is the difference between a LUN and a Storage Container?
LUN:
  • Fixed size
  • Needs a file system
  • Storage capabilities can only be applied to all VMs provisioned on that LUN
  • Managed by file system commands
Storage Containers:
  • Size based on array capacity
  • Max number of Storage Containers depends only on the array's capabilities
  • Can dynamically be resized (expand & shrink)
  • Can distinguish between storage capabilities for different VMs (Virtual Volumes) provisioned on the same SC
For a detailed overview of VVOL and everything else that will be new, watch this video by Paudie O'Riordan.

Jumbo Frames – Do It Right

Configuring jumbo frames can be such a pain if it doesn't get done properly. Over the last couple of years, I have seen many customers with mismatched MTUs due to improperly configured jumbo frames. Done properly, jumbo frames can increase the overall network performance between your hosts and your storage array, and they are worth using if you have a 10GbE connection to your storage device. However, if they are not configured properly, jumbo frames quickly become your worst nightmare. I have seen them cause performance issues, dropped connections, and even ESXi hosts losing their storage devices.

Now that we know what kind of issues jumbo frames can cause, and when they are advisable, let's go over some details about jumbo frames:

  • Larger than 1500 bytes
  • Many devices support up to 9216 bytes
    • Refer to your switch manual for the proper setting
  • Most people refer to jumbo frames as an MTU of 9000 bytes
  • Misconfiguration often leads to an MTU mismatch

The steps below offer guidance on how to set up jumbo frames properly:

Note: I recommend scheduling a maintenance window for this change!

On your Cisco Switch:

Please take a look at this Cisco page, which lists the syntax for most of their switches.
Once the switch ports have been configured properly, we can go ahead and change the network settings on the storage device.

On Nimble OS 1.4.x:

  1. Go to Manage -> Array -> Edit Network Addresses
  2. Change the MTU of your data interfaces from 1500 to jumbo

On Nimble OS 2.x:

  1. Go to Administration -> Network Configuration -> Active Settings -> Subnets
  2. Select your data subnet and click on edit. Change the MTU of your data interfaces from 1500 to jumbo.


On ESXi 5.x:

  1. Connect to your vCenter using the vSphere Client
  2. Go to Home -> Inventory -> Hosts and Clusters
  3. Select your ESXi host and click on Configuration -> Networking
  4. Click on Properties of the vSwitch that you want to configure for jumbo frames
  5. Select the vSwitch and click on Edit.
  6. Under "Advanced Properties", change the MTU from 1500 to 9000 and click OK.
  7. Next, select your vmkernel port and click on Edit.
  8. Under "NIC settings" you can change the MTU to 9000.
  9. Repeat steps 7 and 8 for all vmkernel ports on this vSwitch. (If you prefer the command line, see the PowerCLI sketch after these steps.)
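
If you manage more than a couple of hosts, the same change can be made with PowerCLI instead of clicking through the vSphere Client. This is a minimal sketch for a standard vSwitch; the host, vSwitch and vmkernel names (esxi01.lab.local, vSwitch1, vmk1) are placeholders for your environment:

# Connect to vCenter (prompts for credentials)
Connect-VIServer vcenter.lab.local

# Raise the MTU on the standard vSwitch
Get-VirtualSwitch -VMHost esxi01.lab.local -Name vSwitch1 | Set-VirtualSwitch -Mtu 9000

# Raise the MTU on the vmkernel port used for storage traffic
Get-VMHostNetworkAdapter -VMHost esxi01.lab.local -VMKernel -Name vmk1 | Set-VMHostNetworkAdapter -Mtu 9000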

After you have changed the settings on your storage device, switch and ESXi host, log in to your ESXi host via SSH and run the following command to verify that jumbo frames are working from end to end:

vmkping -d -s 8972 -I vmkport_with_MTU_9000 storage_data_ip

The -d flag sets the don't-fragment bit, and a payload of 8972 bytes plus the 8-byte ICMP header and 20-byte IP header adds up to exactly 9000 bytes. If the ping succeeds, you've configured jumbo frames correctly.
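
To spot MTU mismatches across the whole cluster without logging in to every host, a quick PowerCLI report can list the configured values (just a sketch, assuming you are already connected to vCenter):

# MTU per standard vSwitch and per vmkernel port, for every host
Get-VMHost | Get-VirtualSwitch | Select-Object VMHost, Name, Mtu
Get-VMHost | Get-VMHostNetworkAdapter -VMKernel | Select-Object VMHost, Name, IP, Mtu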

Windows 2012 with e1000e could cause data corruption

A couple of days ago, I spent two hours setting up two Windows Server 2012 VMs on my ESXi 5.1 cluster and tried to get some performance tests done. When copying multiple ISOs across the network between those two VMs, I received an error that none of my 5 ISOs could be opened on the destination.

After checking the settings of my VMs, I saw that I had used the default e1000e vNICs. Apparently, this is a known issue with Windows Server 2012 VMs using e1000e vNICs running on top of VMware ESXi 5.0 and 5.1.

The scary part is that the e1000e vNIC is the default vNIC VMware selects when creating a new VM. This means that if you don't carefully select the correct vNIC type when creating your VM, you could potentially run into the data corruption issue.
The easiest workaround is to change the vNIC type from e1000e to e1000 or VMXNET3. However, if you use DHCP, your VM will get a new IP assigned, as the DHCP server will see the new NIC.
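
If you go the workaround route, the vNIC type can also be changed with PowerCLI. A minimal sketch; the VM name is a placeholder, the VM should be powered off, and the guest needs VMware Tools for the VMXNET3 driver:

# Swap the network adapter type from e1000e to VMXNET3
Get-VM "Win2012-VM" | Get-NetworkAdapter | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false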

If you prefer not to change the vNIC type, you might just want to disable the offload features on the Windows 2012 VMs.
There are three adapter settings that should be changed (a PowerShell sketch for them follows below):

  • IPv4 Checksum Offload
  • Large Send Offload (IPv4)
  • TCP Checksum Offload (IPv4)

Further details can be found in VMware KBA 2058692.
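
On Windows Server 2012, the same three settings can also be changed from an elevated PowerShell prompt inside the guest instead of the NIC properties dialog. A rough sketch: it assumes the adapter is named "Ethernet" and that your driver exposes exactly these display names (they can vary between driver versions, so check the output of Get-NetAdapterAdvancedProperty first):

# Show the adapter's current advanced properties
Get-NetAdapterAdvancedProperty -Name "Ethernet"

# Disable the three offload settings listed above
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "IPv4 Checksum Offload" -DisplayValue "Disabled"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Large Send Offload (IPv4)" -DisplayValue "Disabled"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "TCP Checksum Offload (IPv4)" -DisplayValue "Disabled"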

Silicon Valley VMUG – Double-Take & VSAN

Today, I attended my first Silicon Valley VMUG at the Biltmore Hotel and Suites in San Jose, CA. Vision Solutions presented their software Double-Take, which provides real-time high availability. Joe Cook, Senior Technical Marketing Manager at VMware, provided an overview of VSAN and its requirements.

I took a couple of notes for both presentations and summarized the most important points below:

Double-Take Availability

  • Allows P2V, V2P, P2P and V2V migrations, including cross-hypervisor
  • Provides hardware- and application-independent failover
  • Monitors availability and provides alerting via SNMP and email
  • Supports VMware vSphere 5.0 and 5.1, as well as Microsoft Hyper-V (server and role) 2008 R2 and 2012
  • Full server migration and failover are only available for Windows; a Linux version will be available in Q4

Double-Take Replication

  • Uses byte-level replication, which continuously watches for changes and transfers them
  • Either real-time or scheduled
  • Replication can be throttled

Double-Take Move

  • Provides file and folder migration
  • Does NOT support mounted file shares; the disk needs to appear as a local drive

VMware Virtual SAN (VSAN) by Joe Cook

Hardware requirements:

  • Any Server on the VMware Compatibility Guide
  • At least 1 of each
    • SAS/SATA/PCIe SSD
    • SAS/NL-SAS/SATA HDD
  • 1Gb/10Gb NIC
  • SAS/SATA controllers (RAID controllers must work in "pass-through" or RAID 0 mode)
  • 4 GB to 8 GB (preferred) USB or SD cards

Implementation requirements (a PowerCLI sketch for enabling VSAN follows this list):

  • Minimum of 3 hosts in a cluster configuration
  • All 3 hosts must contribute storage
  • vSphere 5.5 U1 or later
  • Maximum of 32 hosts
  • Locally attached disks
    • Magnetic disks (HDD)
    • Flash-based devices (SSD)
  • 1Gb or 10Gb (preferred) Ethernet connectivity
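
For reference, newer PowerCLI releases (5.5 R2 and later) can enable Virtual SAN on a cluster from the command line. This is only a sketch; the cluster name is a placeholder and it assumes the VSAN parameters that shipped with that release:

# Enable Virtual SAN on the cluster and let it claim local disks automatically
Get-Cluster "Production" | Set-Cluster -VsanEnabled:$true -VsanDiskClaimMode Automatic -Confirm:$false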

Virtual SAN Datastore

  • Distributed datastore capacity, aggregating disk groups found across multiple hosts within the same vSphere cluster
  • Total capacity is based on magnetic disks (HDDs) only.
  • Flash based devices (SSDs) are dedicated to VSAN’s caching layer

Virtual SAN Network

  • Requires a dedicated VMkernel interface for Virtual SAN traffic (see the PowerCLI sketch after this list)
    • Used for intra-cluster communication and data replication
  • Standard and Distributed vSwitches are supported
  • NIC teaming – used for availability not for bandwidth
  • Layer 2 Multicast must be enabled on physical switches
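
The dedicated VMkernel interface mentioned above can be created with PowerCLI as well. Again just a sketch; the host, vSwitch, port group and IP values are placeholders, and -VsanTrafficEnabled assumes PowerCLI 5.5 R2 or later:

# Create a vmkernel port for Virtual SAN traffic on a standard vSwitch
New-VMHostNetworkAdapter -VMHost esxi01.lab.local -VirtualSwitch vSwitch0 -PortGroup "VSAN" -IP 192.168.50.11 -SubnetMask 255.255.255.0 -VsanTrafficEnabled:$true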

Virtual SAN Scalable Architecture

  • VSAN provides a scale-up and scale-out architecture
    • HDDs are used for capacity
    • SSDs are used for performance
    • Disk Groups are used for performance and capacity
    • Nodes are used for compute capacity

Additional information

  • VSAN is a cluster level feature like DRS and HA
  • VSAN will be deployed, configured and managed through the vSphere Web Client only
  • Hands-on labs are available here