Unmount VMware Datastore – Device Busy

Welcome back! I hope everyone had some time to relax and spend the Christmas holidays with their families. I was lucky enough to have some time off and play with my lab.

After playing around with some newly deployed NFS datastores, I tried to unmount them and got a device busy error. On the CLI, the command returned: Sysinfo error on operation returned status : Busy. Please see the VMkernel log for detailed error information.

Let me show you the steps I ran through:

  1. Mounted a new NFS datastore through the ESXCLI
    1. esxcli storage nfs add --host=nas-ip --share=share-name --volume-name=volume_name
  2. List all NFS shares
    1. ~ # esxcli storage nfs list
      Screen Shot 2015-01-12 at 3.04.01 PM
  3. Verify that all VMs on this datastore are either powered off or have been migrated (Storage vMotion) to another datastore
  4. Try to unmount the datastore
    1. esxcli storage nfs remove -v b3c1
      Sysinfo error on operation returned status : Busy. Please see the VMkernel log for detailed error information
  5. Looking through the vmkernel.log doesn’t help much either (see the command sketch after this list). The only message printed there is:
    1. 2015-01-12T23:10:09.357Z cpu2:49350 opID=abdf676b)WARNING: NFS: 1921: b3c1 has open files, cannot be unmounted
  6. After some searching, I found this article on VMware
  7. Basically, the issue seems to be that vSphere HA Datastore Heartbeats are enabled on this datastore, which keeps files open on it and is what makes the device busy.
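
For reference, here is the whole flow as I ran it from the ESXi shell (b3c1 is just the volume name from my lab, so substitute your own):

  # List the mounted NFS datastores and attempt the unmount
  esxcli storage nfs list
  esxcli storage nfs remove -v b3c1

  # If the remove fails with "Busy", check the VMkernel log for the reason
  grep -i "cannot be unmounted" /var/log/vmkernel.log | tail -n 5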

The solution for this problem is pretty simple. Open your vSphere Client, select your vSphere HA cluster and edit the vSphere HA settings. Within the settings, set the heartbeat datastore selection to Use datastores only from the specified list and deselect the datastore you are trying to unmount.
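
If you want to double-check from the ESXi shell that HA really is what is keeping files open, you can peek at the heartbeat files on the datastore. This is just a rough check, assuming the default hidden .vSphere-HA folder that HA creates on its heartbeat datastores:

  # vSphere HA (FDM) keeps its heartbeat files in a hidden folder on the datastore
  ls -la /vmfs/volumes/b3c1/.vSphere-HA/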

Screen Shot 2015-01-12 at 2.39.42 PM

After changing the setting, I was able to successfully unmount the NFS share with the following command: esxcli storage nfs remove -v datastore_name
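
So the final sequence, once the datastore is no longer selected for heartbeating, looks like this:

  # The unmount now succeeds
  esxcli storage nfs remove -v b3c1

  # Verify that the share is gone
  esxcli storage nfs list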

VMware Update Manager 5.5 Installation

Last week I started to set up VUM – VMware vSphere Update Manager 5.5 – to get my ESXi hosts updated and some 3rd party software installed. If you have multiple ESXi hosts and need an easy way to keep your vSphere environment current, VUM is the way to go.

Additionally, VUM plays quite nicely with DRS (Distributed Resource Scheduler) to avoid any downtime for your VMs while applying patches to the hosts. DRS will migrate all active VMs off the host so it can be put into maintenance mode, one host at a time.
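
This is not part of the VUM installation itself, but it is handy once you start remediating: you can verify from the ESXi shell what a host is actually running after a patch cycle. A quick sketch:

  # Show the ESXi version and build the host is currently running
  vmware -vl

  # List the installed VIBs to confirm a patch or 3rd party bundle made it onto the host
  esxcli software vib list | head -n 20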

For those who do not know where to get VMware Update Manager: it is part of the VMware vSphere 5.5 ISO, and it took me a while to find it. Once you load the ISO, you can find VMware Update Manager under VMware vCenter Support Tools.

VUM1

The installation is pretty straightforward.

Screen Shot 2014-12-17 at 3.40.27 PM

Once you have launched the installation wizard, click Next and then accept the EULA – End User License Agreement.

Screen Shot 2014-12-17 at 3.40.35 PM

On the next screen, you can select whether updates should be automatically downloaded from the default sources after the installation. By default, this option is enabled, but it can be disabled if you prefer to review the default sources first.

Screen Shot 2014-12-17 at 3.40.42 PM

Next, you have to specify your vCenter Server address, port and credentials in order to register VUM with it.
Note: VCSA – vCenter Server Appliance – has also been supported since VUM 5.0. However, VUM itself still needs to be installed on a Windows server.

Screen Shot 2014-12-17 at 3.41.04 PM

After you have specified your vCenter credentials, you have to choose whether to install a Microsoft SQL Server 2008 R2 Express instance or use an existing/different database. For smaller vSphere deployments it is OK to use the bundled SQL Server Express. However, if you plan to have more than 5 hosts or 50 VMs, you will be better off with a different database; more information can be found here.

Screen Shot 2014-12-17 at 3.41.12 PM

On the next screen, you can specify whether VUM will be identified by IP address or DNS name. Personally, I always choose IP, since VUM would still be reachable even if the DNS server is down. Additionally, you can change the default ports for SOAP, Web and SSL.

Screen Shot 2014-12-17 at 3.41.19 PM

Next, specify the installation folder for VMware vSphere Update Manager and the location for downloading patches.

Note: The patch location should have at least 20 GB free space.

Screen Shot 2014-12-17 at 3.41.31 PM

Now, VUM will start to extract the executable for the Microsoft SQL Server.

Screen Shot 2014-12-17 at 3.41.47 PM

For the actual Microsoft SQL Server installation, you do not need to do anything; it is automated by the VUM installer.

Screen Shot 2014-12-17 at 3.43.08 PM

Last but not least, click Finish to complete the installation.

Screen Shot 2014-12-17 at 3.45.54 PM

Before you can start using VUM, log into your vCenter Server and click on Home. Under Solutions and Applications, you should be able to see Update Manager.

Screen Shot 2014-12-17 at 3.54.28 PM

If Update Manager does not show up, go to Plug-ins -> Manage Plug-ins and verify that the VMware vSphere Update Manager plug-in is enabled. You will need to install the VUM client on your local machine through the Plug-in Manager.

Screen Shot 2014-12-17 at 3.54.33 PM

VMware VVOL – Overview

Over the last couple of months, I have tried to get up to speed with all the talk about VMware VVOL and how it will change the virtualization game. Below is a quick summary that should give you a good overview of what will change:

VMWare VVOL Overview

VVOLs will no longer use VMFS3 & VMFS5 and will replace datastores as we know them today. Unlike LUNs, VVOLs are not pre-provisioned; they are created when you create a VM, power on a VM, or clone or snapshot a VM. VASA will be used to manage the underlying storage arrays through ESXi. For users who are not familiar with VASA, the vSphere APIs for Storage Awareness are used to create and provide storage for ESXi hosts on the storage array. Storage arrays will be partitioned into logical containers (Storage Containers). Note: Storage Containers are not LUNs.

Virtual machine files, including the configuration files, will be stored on Storage Containers.
Additionally, the main data services will be offloaded to the array. Storage policies can specify the capabilities of the underlying storage array. This allows capabilities like snapshots, replication, deduplication and encryption to be enabled on a per-VM level rather than a per-datastore level.

VASA Provider (VP)
The VP is a software component developed and provided by your storage vendor. It is used to present the array-side capabilities to the ESXi host, which can then utilize them or assign them to virtual machines. The VASA Provider uses the vSphere APIs for Storage Awareness, which are exported by ESXi. Most storage vendors will have the VP sitting on their controllers. However, VPs can also ship as a VMware appliance.
Protocol Endpoint (PE)
Protocol Endpoints enable the communication between ESXi hosts and the storage array. PEs are managed by the storage administrator and support FC, iSCSI, FCoE and NFS. New PEs will be discovered as part of the rescan process on an ESXi host.
Multi-pathing will work exactly the same way, and existing multi-path policies can be applied.
Storage Containers
Storage Containers are not LUNs!
A single array can host multiple Storage Containers, and a Storage Container can hold many VVOLs. The capacity of a Storage Container is based on the physical storage capacity of your array.
As mentioned before, Storage Policies will be used to represent the capabilities of the underlying array. Storage Policies can be assigned to Storage Containers or VMs. Multiple PEs can access the same Storage Container, and every array has at least one Storage Container. Storage Containers will remind most people of datastores.
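
Once VVOLs actually ship with vSphere 6.0, ESXi is expected to expose an esxcli namespace for them. Assuming the 6.0 command syntax, a quick way to see which VASA Providers, Protocol Endpoints and Storage Containers a host knows about would be:

  # List the registered VASA Providers (VPs)
  esxcli storage vvol vasaprovider list

  # List the Protocol Endpoints (PEs) discovered during the rescan
  esxcli storage vvol protocolendpoint list

  # List the Storage Containers presented by the array
  esxcli storage vvol storagecontainer list
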
Difference between LUN and Storage Containers?
LUN:
  • Fixed size
  • Needs a FileSystem
  • Can only apply storage capabilities to all VMs provisioned on that LUN
  • Managed by file system commands
Storage Containers:
  • Size based on array capacity
  • Max number of Storage Containers depends only on the array’s capabilities
  • Can be dynamically resized (expand & shrink)
  • Can distinguish between storage capabilities for different VMs (Virtual Volumes) provisioned on the same SC
For a detailed overview of VVOL and everything else that will be new, watch this video by Paudie O’Riordan.

Nimble Storage Fibre Channel & VMware ESXi Setup

This post will cover the integration of a Nimble Storage Fibre Channel array into a VMware ESXi environment. The steps are fairly similar to integrating a Nimble iSCSI array, but there are a few FC-specific settings that need to be configured.

First, go ahead and create a new volume on your array. Go to Manage -> Volumes and click on New Volume. Specify the Volume Name and Description, and select the appropriate Performance Policy to ensure proper block alignment. Next, select the initiator group that contains the initiator information of your ESXi host. If you don’t have an initiator group yet, click on New Initiator Group.

Create Volume FC

Name your new initiator group and specify the WWNs of your ESXi hosts. This will allow your hosts to connect to the newly created volume.
Also, specify a unique LUN ID. In this case, I have assigned LUN ID 87.
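
If you don’t have the WWNs handy, you can pull them straight from the ESXi host. A quick sketch:

  # Show the FC adapters with their WWNN/WWPN, which go into the Nimble initiator group
  esxcli storage san fc list

  # The adapter UIDs listed here also contain the WWNs
  esxcli storage core adapter list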

Screen Shot 2014-11-20 at 8.58.41 PM

 

Screen Shot 2014-11-20 at 9.41.54 PM

Next, specify the size and reservation settings for the volume.

Screen Shot 2014-11-20 at 8.59.45 PM

Specify any protection schedule if required and click on Finish to create the volume.

Screen Shot 2014-11-20 at 9.31.13 PM

Now the volume is created on the array, and your initiator group is set up to allow the FC HBAs on your host to connect.
After a rescan of the FC HBAs, I can see my LUN with ID 87.
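
The rescan can also be kicked off from the ESXi shell. A small sketch; the grep simply assumes the device display name contains the vendor string:

  # Rescan all HBAs so the new LUN shows up
  esxcli storage core adapter rescan --all

  # Look for the Nimble volume among the SCSI devices
  esxcli storage core device list | grep -i nimble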

Screen Shot 2014-11-20 at 9.40.31 PM

 

Looking at the path details for LUN 87, you can see 8 paths (2 HBAs x 4 target ports). The PSP should be set to NIMBLE_PSP_DIRECTED.
I have 4 Active (I/O) paths and 4 Standby paths. The Active (I/O) paths go to the active controller, and the Standby paths are for the standby controller.
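
To confirm the PSP and path states from the shell, something like the following works. The eui ID below is just a placeholder for the actual device ID of the Nimble volume:

  # Show the PSP (should be NIMBLE_PSP_DIRECTED) and the working paths for the device
  esxcli storage nmp device list -d eui.xxxxxxxxxxxxxxxx

  # List all paths to the device; 8 are expected here (2 HBAs x 4 target ports)
  esxcli storage core path list -d eui.xxxxxxxxxxxxxxxx

  # If the Nimble PSP is missing, check that the Nimble Connection Manager VIB is installed
  esxcli software vib list | grep -i nimble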

Screen Shot 2014-11-20 at 9.49.51 PM

 

On the array I can now see all 8 paths under Manage -> Connections.

Screen Shot 2014-11-20 at 9.53.17 PM

The volume can now be used as a Raw Device Mapping or a datastore. Those are all the steps required to get your FC array connected to an ESXi host, once the zones on your FC switches are configured.

Some of the images were provided by Rich Fenton, one of Nimble’s Sales Engineers in the UK.

ESXi Fibre Channel Configuration Maximums

Today we ran into an issue where we could not see all the LUNs presented to our hosts. In this case, we already had multiple Nimble arrays connected. We added 6 more LUNs from one of the Nimbles to the initiator group that maps to the ESXi host. We checked the logged-in initiators, the initiator group and the zones, and EVERYTHING appeared to be connected correctly. We have 4 initiators coming out of the ESXi host, and all 4 were logged into the fabric and the array and showing connections to the LUNs. However… not all of the LUNs were showing up under Storage Adapters on the host. This is what we saw:

HBA1 sees 0, 1, 2 and 3

HBA2 sees 0, 1 and 2

HBA3 sees 0, 1, 2, 3, 4 and 5

HBA4 sees 0, 1 and 2

Needless to say… that was kind of odd, since everything was showing up as logged in. We rescanned multiple times and restarted the vSphere Client for good measure. Eventually, we ran the command esxcli storage core adapter rescan --all from the command line. When we did THAT… the system spit out a bunch of errors that were pretty close to this:

The maximum number of supported paths of 1024 has been reached. Path vmhba2:C0:T0:L3 could not be added.
The number of paths allocated has reached the maximum: 1024. Path: vmhba5:C0:T6:L28 will not be allocated. This warning won’t be repeated until some paths are removed.

 If you look here:

 http://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf

You will see that ESXi supports a MAXIMUM of 1024 paths per SERVER. This is not a per-ADAPTER limit…

We’ve seen some of the ESXi maximums in the past with iSCSI, but usually we hit the device limit long before we hit the path limit.
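
A quick way to see how close a host is to those limits is to count them from the ESXi shell. A rough sketch:

  # Count the paths on this host (vSphere 5.5 maximum: 1024 per host)
  esxcli storage core path list | grep -c "Runtime Name"

  # Count the devices as well, since with iSCSI the per-host device limit is usually hit first
  esxcli storage core device list | grep -c "Devfs Path"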

So two things to learn from this:

  • Be aware that the vSphere Client won’t always be verbose about what’s going on.
  • Be aware of the ESXi Configuration Maximums. Whichever limit you hit first wins. It’s not a this, this, and this sorta thing; it’s a this or this or this kind of thing.